repo_name | issue_id | text
---|---|---|
spherolearn/sphero | 252307384 | Title: Suggestions for sphero (Sphero, Inc.)
Question:
username_0: - which on today's mobile devices and compueter is basically limitless.
CORRECTION
- which on today's mobile devices and computer is basically limitless.
---
## EDU APP
- Be able to EDIT code, without connecting.
Answers:
username_0: https://sphero.docsapp.io/docs/sensors
username_0: Colour broken links on this above |
covid-19-relief/covid-relief-frontend | 585639255 | Title: List mapping issue
Question:
username_0: List is not able to map over the data being passed down from Home. I entered some dummy data in Home, passed it as props, and console logged this.props.funds in List--so far so good: it shows an array of objects just like we want. But when we try to map over it, we get the "Cannot read property 'map' of undefined" error.<issue_closed>
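A minimal sketch of the usual fix, assuming the data reaches List after its first render (the component shape and the `fund.id`/`fund.name` fields below are hypothetical, not the real code):
```jsx
// Hypothetical List component; field names are made up for illustration.
function List({ funds }) {
  // On the first render, Home may not have passed the data yet, so guard
  // against `funds` being undefined before calling .map() on it.
  if (!funds) return <p>Loading...</p>;
  return (
    <ul>
      {funds.map((fund) => (
        <li key={fund.id}>{fund.name}</li>
      ))}
    </ul>
  );
}
```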
Status: Issue closed |
maciejzasada/grunt-gae | 43970742 | Title: app.yaml
Question:
username_0: A program can consist of multiple modules: `default/app.yaml` `mod1/my_module1.yaml` `mod2/my_module2.yaml`. Not sure if option.path supports this?
Answers:
username_1: It does. At least it works for me with `path: 'app.yaml another-module.yaml'`.
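For reference, a hypothetical Gruntfile sketch of that multi-module setup (the task layout is an assumption; only the space-separated `path` option is confirmed above):
```js
// Hypothetical grunt-gae configuration; adjust names to your project.
grunt.initConfig({
  gae: {
    deploy: {
      action: 'update',
      options: {
        // Space-separated list of module descriptors, per the answer above.
        path: 'default/app.yaml mod1/my_module1.yaml mod2/my_module2.yaml'
      }
    }
  }
});
```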
Status: Issue closed
|
linkedpipes/etl | 955726892 | Title: Add pipeline model
Question:
username_0: We need to define a pipeline model so we can load all the needed data from RDF and store it. As of now we just load what is needed and do not touch the rest. This leads to dangling resources like unused edges. The original idea was to allow the user to provide any data as a part of the pipeline, but that was never used. |
bluesky/bluesky-mpl | 532790212 | Title: In the Image class, give the colorbar and image axes "names"
Question:
username_0: I refer to whatever "name" shows up when you click the matplotlib canvas toolbar settings button and it asks you to choose an axis. It currently says "anonymous" and the Python object id.
@tacaswell -- Any hints? |
home-assistant/core | 644853361 | Title: GPSD incorrectly reports "2D Fix" instead of "3D Fix"
Question:
username_0: <!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/core/releases
- Do not report issues for integrations if you are using custom components or integrations.
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks.
DO NOT DELETE ANY TEXT from this template! Otherwise, your issue may be closed without comment.
-->
## The problem
<!--
Describe the issue you are experiencing here to communicate to the
maintainers. Tell us what you were trying to do and what happened.
-->
The GPS sensor is receiving data from gpsd on localhost, but it shows the value as "2D Fix" instead of the "3D Fix" reported by cgps.
## Environment
<!--
Provide details about the versions you are using, which helps us to reproduce
and find the issue quicker. Version information is found in the
Home Assistant frontend: Developer tools -> Info.
-->
- Home Assistant Core release with the issue:
- Last working Home Assistant Core release (if known):
- Operating environment (OS/Container/Supervised/Core):
- Integration causing this issue:
- Link to integration documentation on our website:
Home Assistant 0.111.2 on RPi 3B+
## Problem-relevant `configuration.yaml`
<!--
An example configuration that caused the problem for you. Fill this out even
if it seems unimportant to you. Please be sure to remove personal information
like passwords, private URLs and other credentials.
-->
```
sensor:
- platform: gpsd
```
## Traceback/Error logs
<!--
If you come across any trace or error logs, please provide them.
-->
```
None
```
## Additional information
<img width="1037" alt="Screen Shot 2020-06-24 at 2 50 29 PM" src="https://user-images.githubusercontent.com/44688061/85616776-34ddf880-b62c-11ea-8767-097f93bba013.png">
<img width="560" alt="Screen Shot 2020-06-24 at 2 49 52 PM" src="https://user-images.githubusercontent.com/44688061/85616795-3a3b4300-b62c-11ea-981b-60a3ba9c7fc9.png"> |
fineemb/Smartmi-smart-heater | 564423764 | Title: Smartmi smart heater entity does not show up
Question:
username_0: Hello, following your instructions I put the miheater folder into custom_components and added the following to my configuration file:
```
climate:
  - platform: miheater
    host: 192.168.50.15
    token: 7<PASSWORD>
    name: 取暖器
```
Why does the entity still not show up? The token and everything else are correct.
Answers:
username_0: Thu Feb 13 2020 11:55:39 GMT+0800 (CST)
miheater: Error on device update!
Traceback (most recent call last):
File "/config/custom_components/miheater/climate.py", line 203, in async_update
power = self._device.send('get_prop', ['power'])[0]
File "/usr/local/lib/python3.7/site-packages/miio/device.py", line 291, in send
raise DeviceError(error)
miio.exceptions.DeviceError: {'code': -5001, 'message': 'command error'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 304, in _async_add_entity
await entity.async_device_update(warning=False)
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 459, in async_device_update
await self.async_update()
File "/usr/local/lib/python3.7/asyncio/coroutines.py", line 120, in coro
res = func(*args, **kw)
File "/config/custom_components/miheater/climate.py", line 233, in async_update
except DeviceException:
NameError: name 'DeviceException' is not defined
This is the log report from hassio.
username_1: What is the model of your heater?
username_0: Smartmi electric heater, smart edition 1S
Status: Issue closed
username_1: This model is not supported, or its protocol is different.
You need to capture the packets yourself to confirm. |
fabric8-ui/fabric8-ux | 264008225 | Title: Wireframe: Add missions to the wizard
Question:
username_0: Consider how the Launch "mission" could be added into the OSIO wizard flow.
Verification criteria:
* Use cases and user flows captured
* Wireframes for the wizard are updated with mission capabilities
* Shared with UXD stakeholders, <NAME>, Joshua for review
Answers:
username_1: Requirements document: https://docs.google.com/a/redhat.com/document/d/1KFsrUlNzEaiUBdbCqgcD-WE_9Dw6_PHFzHG1sX71bhA/edit?usp=sharing
username_1: @username_0 Here are some of the initial user flows that feature a mission first, technology first, and a technology and mission same time selection:
https://redhat.invisionapp.com/share/W6ED9TBCE
Any feedback would be appreciated.
username_1: @username_0 A first pass of wireframes and updated user flow have been added to the Invision document as well
https://redhat.invisionapp.com/share/WREDX4Z67
username_1: @username_0 Here's the most recent flow and wireframes for the wizard
https://redhat.invisionapp.com/share/BEEH9TWT2
username_1: @username_0 Here are the most recent wireframes:
https://redhat.invisionapp.com/share/BEEH9TWT2
Status: Issue closed
|
ipaddress-gem/ipaddress | 222182682 | Title: derive host address from network
Question:
username_0: Given a network and a host portion, is it possible for IPAddress to generate the full host address in that network? For example, if I've got an `IPAddress('1.1.1.0/24')`, can I ask it for the full address of a host whose host bits are `123` (i.e. `1.1.1.123`)?
I searched the various methods and did not see anything direct (yes, you can splice octets, but that only works reliably for `/8`, `/16`, and `/24` networks). There are useful applications in templating, e.g. "I'm planning IPs for a new network, and hosts of a given role always have the host bits `123`, so please give me the full address of that host in this network"
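One workaround sketch is to OR the network's integer form with the host bits (method names are taken from the ipaddress gem's README, but treat this as untested):
```ruby
require 'ipaddress'

net = IPAddress('1.1.1.0/24')
# OR the host bits into the integer form of the network address,
# then parse the result back into an IPv4 on the same prefix.
host_u32 = net.network.to_u32 | 123
full = IPAddress::IPv4.parse_u32(host_u32, net.prefix.to_i)
puts full.to_s   # expected: "1.1.1.123"
```
|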
spacetelescope/jwst | 813749822 | Title: Investigate how to subtract a user supplied master background for NIRSpec MOS data
Question:
username_0: _Issue [JP-1935](https://jira.stsci.edu/browse/JP-1935) was created on JIRA by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=morrison):_
This ticket relates to JP-1916.
The NIRSpec team would like to be able to supply a user background to subtract. Background subtraction for NIRSpec MOS data is done in calspec 2. Below is some of the conversation concerning this ticket from JP-1916.
From Howard:
Due to all the issues associated with differences in wavelength assignments between MOS slits that are POINT vs EXTENDED, and the fact that some of the spec2 calibration steps are wavelength-dependent, we had implemented special handling for all of that within spec2. It was more convenient to do there, because the master background can be applied to POINT slits **before** they've had their flatfield, pathloss, photom, etc. corrections applied and hence we can avoid the hassle of needing to back out those corrections (or equivalently apply the inverse corrections to the master background). This of course had all been worked out for the use case of background slits in the MOS exposure itself being used to provide the master background. Will need to do a bit of thinking about how this would apply in the case of a user-supplied master bkg spectrum. Applying it during calspec3 would naturally entail operations on POINT slits that've had all of their point-like calibrations applied and hence we're in the situation of mismatches between the master bkg spectrum and the science slits. So even the user-supplied master bkg case might need to be handled in the midst of calspec2, instead of calspec3. I'll need to reacquaint myself with the sequencing of operations that we implemented for applying the mbkg subtraction in calspec2 and see if it'll work with a user-supplied master bkg.
The user-supplied case for MOS is going to be tricky no matter where we try to fit it in, due to the way we decided to implement the overall MOS master bkg approach. In order to avoid having to carry along all of the flatfield, pathloss, photom, etc. correction arrays for every slit, in order to use them to decorrect the calibrated master background, we implemented it mid-stream in calspec2 before many of those corrections are applied to any of the slits. In that case we fully calibrate all of the background slits, extract them, combine the extracted spectra into the master bkg, and then go back and uncalibrate the expanded 2D version of the master bkg when it gets applied to all of the uncalibrated 2D science slits. Then we finish the rest of the normal calibration steps for the science slits.
For the case of a user-supplied master bkg spectrum, which I assume will be fully calibrated as an extended source, we might be able to shoehorn that into the flow at the same point at which we would've normally had a fully calibrated spectrum from the background slits to work with. It might work to then apply the same kinds of "uncalibration" to the expanded 2D user spectrum to get it to match each of the uncalibrated science slit instances. Again, I'll need to dive into the code again to remind myself what all we need to do that "uncalibration" of the mbkg spectrum and if it's available to us in the user-supplied scenario.
Status: Issue closed
Answers:
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=bushouse) on [JIRA](https://jira.stsci.edu/browse/JP-1935?focusedCommentId=516644#comment-516644):_
I've reviewed the MOS master background subtraction code that was implemented within the calwebb_spec2 framework and it already has the appropriate hooks in it to accommodate either the calculation of a master bkg spectrum from background-only slits within the MOS exposure or a user-provided master bkg spectrum. In both cases, the dedicated master bkg code used for the NRS MOS mode does the following:
1. Process all slitlets in the MOS exposure up through the extract_2d and srctype steps
2. The MOS-mode master bkg process then finishes processing of all slitlets up through the photom step, treating **all** slits as extended sources, saving the correction arrays (flatfield, pathloss, barshadow, photom) used for each slitlet
3. If no user-supplied mbkg spectrum was given, the fully-calibrated data for each background slitlet goes through the resample_spec and extract_1d steps to create a 1D spectrum for each background slitlet and then those are combined into a master 1D bkg spectrum by calling the combine_1d step
4. If a user-supplied mbkg spectrum was given, step 3 is skipped and the user-supplied mbkg spectrum is substituted into the processing flow
5. The mbkg spectrum (either derived or user-supplied) is expanded to the 2D space of each MOS slitlet
6. Each 2D background "image" is processed in **inverse** mode through the photom, barshadow, pathloss, and flatfield steps to *un*calibrate the background data (using the correction arrays for each slitlet that were saved in step 2) to match the background signal that's present in each slitlet
7. The uncalibrated 2D bkg is subtracted from the 2D data for each slitlet
8. The remaining normal calibration steps (flatfield, pathloss, barshadow, photom, resample_spec, extract_1d) are applied to all of the background-subtracted slitlets
In addition to this flow appearing to be correct on paper, it appears to work properly in practice as well. I've verified this by processing a MOS exposure that contains a background slitlet through the normal master bkg scheme in calwebb_spec2, i.e. following steps 1-3 and 5-8 above, saving the resulting 1D master bkg spectrum created in step 3 as an optional product. I then reran calwebb_spec2 on the same exposure, but this time supplying the 1D master bkg spectrum via the `user_background` parameter, hence skipping step 3. The final results for all slitlets in the cal, s2d, and x1d products were exactly identical for both processing runs.
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=bushouse) on [JIRA](https://jira.stsci.edu/browse/JP-1935?focusedCommentId=516646#comment-516646):_
Based on these results I believe we can say that we have a valid mechanism for applying a user-supplied mbkg spectrum to NRS MOS exposures via the existing code in the calwebb_spec2 pipeline (as opposed to calwebb_spec3). So I'm going to close this ticket and move on to JP-1960 where we intend to document all of this in a way that's hopefully more clear to users (and ourselves). |
EarthMC/Issue-Tracker | 403548335 | Title: [Feature]
Question:
username_0: OK, so I'm not doing this for myself or because I'm a donator myself, but I think all donators (yellow, purple and blue) should get player heads.
They are helping you keep up the server and all they get is a few commands and putting things on their head. I am grateful for the features I got, but I just think all donators should get a skull of their own head
Answers:
username_1: 🤔 The heads are one of the only 2 things that give people an incentive to buy Frost over Yellow.
username_0: Yeah, but they gave money to the server, which is why I think they deserve something good back that can actually be used
username_2: yeah
username_3: This makes no sense. With your logic yellow donators would just get everything that the higher tiers get even though the higher ones have donated much more.
Status: Issue closed
|
cisco/ChezScheme | 222042006 | Title: Compile error on Android ARM
Question:
username_0: I am trying to cross-compile Chez Scheme on my Lenovo tablet running termux (a unix-like environment), with help on Google Groups from Atticus and <NAME>. I performed the following steps:
Step 1: On the host machine (an i3 desktop running Linux Mint)
cd # /home/phil
git clone https://github.com/cisco/ChezScheme.git
cd ChezScheme-master
./configure
sudo make install
mkdir boot/arm32le
cd a6le
make -f Mf-boot arm32le.boot
cd ..
./configure -m=arm32le
./configure --workarea=arm32le
cd arm32le/s
make -f Mf-cross m=a6le xm=arm32le base=../../a6le
cd ../../..
Step 2: On the target machine (a Lenovo ARM tablet running Android 5)
cd # /data/data/com.termux/files/home
scp -r phil@haydn:/home/phil/ChezScheme-master/ .
edit file ChezScheme-master/configure
change /bin/sh to /data/data/com.termux/files/usr/bin/sh
edit file ChezScheme-master/zlib/configure
change /bin/sh to /data/data/com.termux/files/usr/bin/sh
cd ChezScheme-master/arm32le/c
make
While performing this step, make encountered three errors, all in file segment.c:
at line 350, character 7, symbol build_ptr
at line 406, character 26, symbol SYMVAL
at line 406, character 12, symbol Sflonum_value
The first two errors were the same:
cast to 'ptr' (aka 'void *') from smaller integer type 'unsigned int'
The third error was a different message:
cast to 'double *' from smaller integer type 'unsigned int'
The C compiler is gcc, clang version 3.9.1, target aarch64--linux-android, thread model posix.
Please help me compile Chez Scheme.
Answers:
username_1: It sounds like you're building the boot and header files for 32-bit (arm32le) and the c directory for 64-bit. If you mean to build the 32-bit version, you might need to supply the -m32 option to the C compiler.
username_2: Successfully built chez on android arm; works for me.
LOCAL_ARM_MODE := arm
LOCAL_CFLAGS += -g -Wall -DANDROID -DINLINES -DGC_MACROS -DARMV6 -Wpointer-arith -Wall -Wextra -DLIBICONV_PLUG -fPIC -pie -fPIE
[scheme-release-1.2.apk](https://raw.githubusercontent.com/username_2/scheme-lib/master/data/apk/scheme-release-1.2.apk)
username_0: I added a line
CFLAGS = -m32
near the top of ChezScheme-master/arm32le/c/Makefile (above the line that defines C), and received the same three errors.
username_3: What does your C compiler report as its target now that CFLAGS is set to target a 32-bit machine? (In your original email you indicated it was targeting aarch64--linux-android, which is definitely a 64-bit architecture.)
-andy:)
username_0: When I perform 'make' with the provided Makefile, as I did before my first message, I get the following error:
gcc -Wpointer-arith -Wextra -Werror -O2 -c -DARMV6 -I../boot/arm32le -I../zlib segment.c
segment.c:350:7: error: cast to 'ptr' (aka 'void *') from smaller integer type 'unsigned int'
[-Werror,-Wint-to-void-pointer-cast]
if (build_ptr(base, 0) == addr && base + nact != ((uptr)1 << (ptr_bits - segment_offset_bits)) - 1)
^
./types.h:109:25: note: expanded from macro 'build_ptr'
#define build_ptr(s,o) ((ptr)(((uptr)(s) << segment_offset_bits) | (uptr)(o)))
^
segment.c:406:26: error: cast to 'ptr *' (aka 'void **') from smaller integer type 'unsigned int'
[-Werror,-Wint-to-pointer-cast]
(iptr)(Sflonum_value(SYMVAL(S_G.heap_reserve_ratio_id)) * S_G.number_of_nonstatic_segments);
^
../boot/arm32le/equates.h:758:22: note: expanded from macro 'SYMVAL'
#define SYMVAL(x) (*((ptr *)((uptr)(x)+5)))
^
segment.c:406:12: error: cast to 'double *' from smaller integer type 'unsigned int'
[-Werror,-Wint-to-pointer-cast]
(iptr)(Sflonum_value(SYMVAL(S_G.heap_reserve_ratio_id)) * S_G.number_of_nonstatic_segments);
^
../boot/arm32le/scheme.h:101:29: note: expanded from macro 'Sflonum_value'
#define Sflonum_value(x) (*((double *)((uptr)(x)+6)))
^
3 errors generated.
make: *** [Makefile:29: segment.o] Error 1
Then when I add a line 'CFLAGS = -m32' to the Makefile, I get exactly the same error, as if CFLAGS is not recognized, and in fact that appears to be the case, since the gcc line at the top of the output does not include the -m32 flag. Next I removed the 'CFLAGS = -m32' line and wrote the -m32 directly in the definition of the C variable. Here is the output of that command:
$ make
gcc -m32 -Wpointer-arith -Wextra -Werror -O2 -c -DARMV6 -I../boot/arm32le -I../zlib segment.c
In file included from segment.c:35:
In file included from ./system.h:25:
In file included from ../zlib/zlib.h:34:
In file included from ../zlib/zconf.h:475:
In file included from /data/data/com.termux/files/usr/include/unistd.h:34:
In file included from /data/data/com.termux/files/usr/include/sys/select.h:35:
In file included from /data/data/com.termux/files/usr/include/signal.h:37:
/data/data/com.termux/files/usr/include/asm/sigcontext.h:44:2: error: unknown type name '__uint128_t'
__uint128_t vregs[32];
^
1 error generated.
make: *** [Makefile:29: segment.o] Error 1
I guess that's progress: we've gone from a 64-bit error to a 128-bit error. The result of 'gcc --version', both with and without the -m32 flag, is shown below:
$ gcc --version
clang version 3.9.1 (tags/RELEASE_391/final)
Target: aarch64--linux-android
Thread model: posix
InstalledDir: /data/data/com.termux/files/usr/bin
$ gcc -m32 --version
clang version 3.9.1 (tags/RELEASE_391/final)
Target: arm--linux-android
Thread model: posix
InstalledDir: /data/data/com.termux/files/usr/bin
Thank you for your assistance.
Phil
username_2: @username_0 You should use the Android NDK to build chez for arm.
username_0: I still don't have this working. I upgraded from clang 3.9.1 to clang 4.0.0, but the error remains the same.
I realize that the error points to something in the compiler, not in Chez Scheme. Do you know a better place to get help?
username_0: Is there anything I can do?
username_2: Just define __int128_t to int
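Taken literally, that workaround is something like the following (note the failing type in the log above is actually `__uint128_t`; this is a hack that shrinks sigcontext's `vregs[]` field, so it is only safe if nothing in the build reads those registers):
```c
/* Sketch: place this before any include that drags in asm/sigcontext.h.
   It changes the size of the vregs[] array, so use with care. */
#define __uint128_t unsigned int
```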
username_4: Android is one of the most important platforms now; any plan to support ARM64, @username_1?
Chez Scheme is a great fit (small and effective) for script programming on embedded devices.
username_3: I don't know that we have any particular plans to work on this right now, but we would certainly welcome the effort if someone wanted to take the lead on the porting work.
username_0: I got Chez Scheme to compile on Android within the GnuRoot environment. See https://programmingpraxis.com/2017/09/15/compile-chez-scheme-on-android-arm/.
username_2: @username_0 android arm does not support floating point.
username_0: @username_2: On my android arm tablet, when I say (sqrt 47.9) Chez responds
6.920982589199311, exactly the same as my desktop computer.
username_2: @username_0 really? did you see this? https://github.com/cisco/ChezScheme/issues/85
username_0: @username_2: Don't ever doubt me.
[image: Inline image 1]
username_5: https://github.com/racket/racket/commit/e337c65204402ef4faf09f6a848d2d873d0e63a7 |
bennn/iPoe | 109922265 | Title: Windows Support
Question:
username_0: I tried installing ipoe on Windows 10. I ran into problems with libedit-3.
Here is the error message I get when trying to run anything with #lang ipoe at the top:
ffi-lib: couldn't open "libedit-3.dll" (The specified module could not be found.; errno=126)
Thanks!
Answers:
username_1: Understood! Thanks for reporting.
Which version of Racket are you using?
username_0: Thanks for getting back quickly, I am using v6.2.900.17
username_1: Okay, I think it's [readline's](http://docs.racket-lang.org/readline/index.html?q=readline#%28mod-path._readline%29) fault! I was able to install the [wo-readline](https://github.com/username_1/iPoe/tree/wo-readline) branch of ipoe through Dr.Racket (6.3.1).
Looking into compile-time solutions for skipping readline if it's not available...
username_1: Ok, you should be able to install ipoe through master now.
username_0: Thank you!
username_0: Hello, I got it installed. How can I use ipoe commands through drracket?
thank you
username_1: Hmm, I'm not sure you can right now (unless there's a way to run a custom "raco ..." command).
I think I could update the REPL environment to have the commands available after an ipoe program is loaded. I'll try that soon.
username_0: Thanks, I'm pretty busy right now, tell me if you ever look into that
username_1: ...
After connecting, the repl prompt changes & you can type "help" to see a list of database commands. Someday I'd like smoother integration, but I need to learn more about Dr. Racket first (especially, how to cleanly start & end a database connection).
(How it works now is that everything at the bottom of [ipoe/lang/reader.rkt](https://github.com/username_1/iPoe/blob/master/ipoe/lang/reader.rkt) gets loaded in the repl)
username_0: Thanks, I'll try that! |
russoedu/colorvalidator | 191998756 | Title: Not working in Laravel 5.3
Question:
username_0: Hello,
I've followed the instructions but I got the following error:
`ErrorException in ColorValidatorServiceProvider.php line 80:
Illegal string offset 'image_size'
in ColorValidatorServiceProvider.php line 80
at HandleExceptions->handleError('2', 'Illegal string offset 'image_size'', '/path/to/vhost/vendor/username_1/colorvalidator/src/ColorValidatorServiceProvider.php', '80', array('rule' => 'image_size', 'method' => 'ImageSize', 'translation' => 'color-validator::validation')) in ColorValidatorServiceProvider.php line 80
at ColorValidatorServiceProvider->extendValidator('image_size') in ColorValidatorServiceProvider.php line 68
at ColorValidatorServiceProvider->addNewRules() in ColorValidatorServiceProvider.php line 42
at ColorValidatorServiceProvider->boot()
at call_user_func_array(array(object(ColorValidatorServiceProvider), 'boot'), array()) in Container.php line 508
at Container->call(array(object(ColorValidatorServiceProvider), 'boot')) in Application.php line 769
at Application->bootProvider(object(ColorValidatorServiceProvider)) in Application.php line 752
at Application->Illuminate\Foundation\{closure}(object(ColorValidatorServiceProvider), '13')
at array_walk(array(object(EventServiceProvider), object(RoutingServiceProvider), object(AuthServiceProvider), object(CookieServiceProvider), object(DatabaseServiceProvider), object(EncryptionServiceProvider), object(FilesystemServiceProvider), object(FoundationServiceProvider), object(NotificationServiceProvider), object(PaginationServiceProvider), object(SessionServiceProvider), object(ViewServiceProvider), object(LaravelJsLocalizationServiceProvider), object(ColorValidatorServiceProvider), object(IdeHelperServiceProvider), object(AppServiceProvider), object(AuthServiceProvider), object(EventServiceProvider), object(RouteServiceProvider), object(TranslationServiceProvider), object(ValidationServiceProvider)), object(Closure)) in Application.php line 753
at Application->boot() in BootProviders.php line 17
at BootProviders->bootstrap(object(Application)) in Application.php line 203
at Application->bootstrapWith(array('Illuminate\Foundation\Bootstrap\DetectEnvironment', 'Illuminate\Foundation\Bootstrap\LoadConfiguration', 'Illuminate\Foundation\Bootstrap\ConfigureLogging', 'Illuminate\Foundation\Bootstrap\HandleExceptions', 'Illuminate\Foundation\Bootstrap\RegisterFacades', 'Illuminate\Foundation\Bootstrap\RegisterProviders', 'Illuminate\Foundation\Bootstrap\BootProviders')) in Kernel.php line 253
at Kernel->bootstrap() in Kernel.php line 144
at Kernel->sendRequestThroughRouter(object(Request)) in Kernel.php line 116
at Kernel->handle(object(Request)) in index.php line 53`
Answers:
username_1: It seems like you are mixing two validators in this array… image size and color:
`'rule' => 'image_size', 'method' => 'ImageSize', 'translation' => 'color-validator::validation'`
username_0: Hello,
I'm only interested in the hex color validator and I haven't used this image_size validation you mentioned - at least not on purpose.
What I did was basically following the steps and adding the rule "color" => "hex_color" like in your example.
Since I found another way around this validation this issue is not urgent but nonetheless I wanted to let you know about this.
Kind regards
username_1: Ok, but this is what's showing in the error log.
I tested in 5.3 and couldn't reproduce the error. |
CS2113-AY1819S1-T16-3/main | 376844206 | Title: None
Question:
username_0: Hi, to clarify 29/02/2020 is a valid date in the leap year 2020.
See screenshot below:
<img width="806" alt="screenshot 2018-11-05 at 1 42 54 pm" src="https://user-images.githubusercontent.com/35729747/47980034-c7045680-e100-11e8-9a3d-fbcacdbbb4fa.png">
Hence, the program will accept that leave application.<issue_closed>
Status: Issue closed |
department-of-veterans-affairs/va.gov-team | 557740726 | Title: CT 116: Remove dead VET TEC search code
Question:
username_0: As a developer, I need to remove unused VET TEC search code so that there is not dead code in the repository.
## Assumptions:
1. N/A
## Acceptance Criteria
1. Existing VET TEC search functionality is not impacted by the removal.
2. All dead VET TEC search code is removed from the vets-website repository.
3. All dead VET TEC search code is removed from the GIDS repository.
Answers:
username_1: PR: https://app.zenhub.com/workspaces/vft-59c95ae5fda7577a9b3184f8/issues/department-of-veterans-affairs/va.gov-team/5343
Can be tested on vets-website branch `bah-institution-cleanup`
username_1: GIDS PR: https://github.com/department-of-veterans-affairs/gibct-data-service/pull/564
Can be tested on gibct-data-service branch `bah-remove-dead`
username_2: Testing passes QA
username_2: In staging closing story.
Status: Issue closed
|
openresty/openresty | 1114206770 | Title: Test OpenResty 1.21.4.1 RC1 on arm64 (m1)- lj_mem_realloc error
Question:
username_0: Hi, I have tested OpenResty 1.21.4.1 RC1 on a macbook with m1 chip.
After a few stress tests, I got this error:
```LuaJIT ASSERT lj_gc.c:872: lj_mem_realloc: allocated memory address 0xffffaae23010 outside required range```
So, I did some research and found this fix for luajit:
https://github.com/neovim/neovim/issues/13760
Did you include it in the rc1 version? If not, do you have any a plan to include it?
I also tested an older version of openresty, for example 1.17.8.2, I got simliar error:
```lj_gc.c:824: lj_mem_realloc: Assertion `(1 ? (((uint64_t)(uintptr_t)((p)) >> 47) == 0) : 1 ? ((uintptr_t)((p)) == (uint32_t)(uintptr_t)((p))) :1)' failed.```
but it looks like it was fixed.
Status: Issue closed
|
FZUG/repo | 120436821 | Title: Cannot install the Chrome browser on Fedora 23
Question:
username_0: I installed following the wiki guide, but ran into a problem. The bash output is as follows:
warning: /var/cache/dnf/google64-376e8cc1298accba/packages/google-chrome-stable-47.0.2526.73-1.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
Curl error (28): Timeout was reached for https://dl-ssl.google.com/linux/linux_signing_key.pub [Connection timed out after 120000 milliseconds]
The downloaded packages were saved in cache till the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Answers:
username_1: In the repo file, change `gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub`
to `gpgkey=https://dl.google.com/linux/linux_signing_key.pub` and it will work.
Status: Issue closed
|
zuck/alighieri | 308118105 | Title: Feature Request: An electron App?
Question:
username_0: It is a nice editor, but it would shine if it were an Electron app.
Answers:
username_1: Yes, I will port the whole codebase to the latest version of the Quasar Framework and then add an official Electron app.
Actually it's already there, but you have to build it yourself by typing:
`quasar build && npm run package-electron [linux|darwin|win]`
username_1: Starting from 0.8.x releases you can simply build it with:
```
npm install && quasar build -m electron
```
username_1: Added a GitHub Actions workflow to release a packaged version of the Electron app on each new tag:
https://github.com/username_1/alighieri/releases/tag/v0.9.4
Status: Issue closed
|
Breta01/handwriting-ocr | 410742201 | Title: I work on 64 bit: Is there any other way to extract the data
Question:
username_0: Hello Breta, thank you so much for responding, I could accomplish the task successfully!
Status: Issue closed
Answers:
username_1: Can you better specify the problem? If you are working on a 64-bit architecture there shouldn't be any problem with the code. Or what exactly do you mean by 64 bit?
|
rapidsai/cudf | 1094906376 | Title: [FEA] Refactor cuDF Python merging logic for index types
Question:
username_0: **Is your feature request related to a problem? Please describe.**
pandas only supports merge operations for DataFrame and Series objects. cudf's merging code also supports merges involving Index objects. This support makes the internals of the merge code excessively convoluted and likely introduces performance overheads for user-facing merge APIs due to the additional logic required to handle index objects (and therefore, to handle input objects that do not themselves have indexes).
**Describe the solution you'd like**
We cannot simply disable merges for index objects because various internal code paths in cudf assume that Index objects may be merged. Therefore, we should separate logic for merging Indexes from the implementation of the public merge APIs. By identifying the exact use cases for index merging we may be able to significantly accelerate code paths relying on these merges since the implementation of such a merge is likely to be much simpler than the current merge implementation, which has to handle all the complexities associated with the pandas merge API. The change should also save us from needing to introduce complex multiple dispatch patterns as proposed in https://github.com/rapidsai/cudf/pull/9807#discussion_r769130492. |
raysun/SidekickIssues | 269737428 | Title: /best - Support sub commands (Gold, Elixir, etc)
Question:
username_0: /best [5 to 15] = show top # on all categories (What we have right now)
/best [trophies, versus trophies, gold, elixir, DE, donation] = show that particular category for the whole members
Answers:
username_0: This feature is no longer needed after /best 50 was supported
Status: Issue closed
|
fsprojects/fantomas | 509455394 | Title: Improve formatting of lambda between parentheses
Question:
username_0: ### Description
input:
```fsharp
let square = (fun b ->
b*b
prinftn "%i" b*b
)
```
formatted:
```fsharp
let square =
(fun b ->
b * b
prinftn "%i" b * b)
```
### Repro code
Add a link [the online fantomas tool](http://ratatosk.dynu.net/fantomas/) where you prove that code doesn't format.<issue_closed>
Status: Issue closed |
haizlin/fe-interview | 1006808208 | Title: [html] Day 893: When laying out a page, are fewer nodes always better? Why?
Question:
username_0: Day 893: When laying out a page, are fewer nodes always better? Why?
[3+1 official site](http://www.h-camel.com/index.html)
[I also want to submit questions](http://www.h-camel.com/contribution.html)
Answers:
username_1: For simple pages, fewer nodes is indeed better, because more nodes affect the page's rendering speed. But for complex pages, too few nodes cannot achieve the effect the project expects. So fewer nodes is not necessarily always better; the number of nodes should be determined by the effect the page has to achieve, and redundant, unneeded nodes should not be there. |
aws/aws-sdk-ruby | 55847788 | Title: CloudFront 2014-11-06 API removed
Question:
username_0: In 5ee6403ad8f2ad235ba4ec96c6b4b53d6aa5607e (released in [1.59.1](https://github.com/aws/aws-sdk-ruby/blob/master/CHANGELOG.md#1591-2014-12-03)), the 2014-11-06 version of the CloudFront API was removed, and replaced with the older 2014-10-21 API. The commit description doesn't offer any insight as to why, other than that it was supposed to correct an "incorrect API version".
I couldn't find an issue specifically talking about this change; can anyone shed some light on why this version was removed?
cc @username_1 since you authored the commit :grinning:
Answers:
username_1: I need to follow up on this. At the time, it was at the request of the service team.
username_1: I've added it back now and it should go out with the next release.
Status: Issue closed
|
igbopie/spherov2.js | 396705590 | Title: bluetooth-hci-socket not installing
Question:
username_0: Hi,
I've recently got a sphero mini and found your repo. I'm having difficulties unfortunately with the installation. Specifically when I do
```
yarn install
```
I get build errors when it comes to bluetooth-hci-socket. I've also never done javascript before (hoping this would be a great way to learn!) so I researched into this and there seems to be a general problem with bluetooth-hci-socket and higher node versions. I found that going back to version 8 would fix the problem, however if I do that then I encounter the error:
```
The engine "node" is incompatible with this module. Expected version ">=10.0.0". Got "4.2.6"
```
Thanks!
Answers:
username_1: Hi!
Sorry for the delay, looks like your node version is not 8, is 4.2.0. Also, for some other issues with other lib, I had to specify node > 10. You can manually disable that restriction removing the `engine` section in the package json.
Let me now!
username_1: Yes, looks like for the raspberry, in order to compile `bluetooth-hci-socket` you need node 8:
https://github.com/noble/noble/issues/253
Maybe I need to remove that node restriction.
username_1: Removed the restriction. You should use node 8. I need to update docs to reflect that.
username_2: bluetooth-hci-socket also won't compile on node 10: https://github.com/noble/node-bluetooth-hci-socket/issues/84
It seems both noble and bluetooth-hci-socket are abandoned, so the abandonware forks should be used.
username_1: Yes, but I made a specific fix for raspberry. I need to test that too.
username_3: I can't install it in Raspberry Pi 4, either. Nodejs v10.15.2. I had used the abandonware version fine when trying to use cylon.js. |
dynamicslab/pysindy | 1129501220 | Title: Add option to perform operations in optimizer as sparse operations
Question:
username_0: ## Is your feature request related to a problem? Please describe.
In a project, I have come across the problem that a large set of features will grow exponentially with the degree of the polynomials I use in the Polynomial Library, this leads to huge memory demands.
## Describe the solution you'd like
I have already created a fork and swapped out some of the numpy operations in the Constrained SR3 optimizer with scipy.sparse operations. For the Constrained SR3 case, the np.kron() operation on line 219 can especially lead to large memory demands. Further, since most of the matrices in the optimizer (and I assume in the other optimizers as well) are highly sparse, it is possible to reduce memory demands significantly. I have noticed that the memory usage has gone down from about 64-512GB RAM to being able to run the code on my laptop, by replacing the operations with sparse operations. Not only that, but the code seems to run roughly 2 to 3 magnitudes faster. I think it may be possible to do this for more optimizers and make this a standard option during training, note that in my fork I have added an argument to the Constrained SR3: `sparsity` that you can set to True to train the model with sparse matrices and operations. My current implementation is far from perfect, but I wanted to alert you to the possibility of speeding up the code and decreasing memory usage, which may open up research with larger feature sets.
## Describe alternatives you've considered
-
## Additional context
This is my fork [pysindy_sparse](https://github.com/username_0/pysindy)
Note that I have done a quick test to compare the two methods; from my own experience the timing differences scale really quickly, and so does the memory usage. For example, with around ~250 features I need to allocate 64GB (for degree=1) and 128GB (for degree=2) on my cluster to run similar code for a project. I can now run this with degree=3 on my personal laptop without any problem.
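As a rough illustration of where the memory goes, here is a toy comparison of the dense vs. sparse Kronecker product (loosely modeled on the np.kron() call mentioned above; the sizes are made up):
```python
import numpy as np
from scipy import sparse

n_targets, n_features = 60, 250

# Dense Kronecker product: materializes a
# (n_targets * n_features) x (n_targets * n_features) array up front.
dense = np.kron(np.eye(n_targets), np.eye(n_features))
print(dense.nbytes / 1e9, "GB")  # ~1.8 GB, almost all zeros

# Sparse equivalent: only the nonzero entries are stored.
sp = sparse.kron(sparse.eye(n_targets), sparse.eye(n_features), format="csr")
print(sp.data.nbytes / 1e6, "MB")  # ~0.12 MB
```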
This is the code for each setup: the first code snippet runs my first naive implementation and the second the current pysindy repo.
```python
import sys
sys.path.append("/Users/egeenjaar/pysindy_sparse")
import pysindy as ps
import numpy as np
import time

np.random.seed(42)
data = np.random.rand(1200, 60).astype(np.float32)
library = ps.PolynomialLibrary(include_interaction=False, degree=2)
library.fit(data)
n_features = library.n_output_features_
lhs = np.zeros((60, 60 * n_features), dtype=np.float32)
for i in range(60):
    lhs[i, i + 1 + i * n_features] = 1
rhs = np.ones((60,), dtype=np.float32)
diff = ps.SmoothedFiniteDifference()
optimizer = ps.ConstrainedSR3(constraint_rhs=rhs, constraint_lhs=lhs, threshold=0.1, max_iter=30, verbose=True,
                              sparsity=True)
model = ps.SINDy(optimizer=optimizer, feature_library=library, differentiation_method=diff)
start_time = time.perf_counter()
model.fit(data, t=1)
end_time = time.perf_counter()
print(f'Total time: {end_time - start_time}')
print(model.coefficients().toarray())
```
**Output**
```
Iteration ... |y - Xw|^2 ... |w-u|^2/v ... R(u) ... Total Error: |y - Xw|^2 + |w - u|^2 / v + R(u)
0 ... 5.4201e+02 ... 7.4341e+00 ... 7.2600e+01 ... 6.2205e+02
3 ... 5.4180e+02 ... 5.6356e+00 ... 7.2600e+01 ... 6.2004e+02
6 ... 5.4180e+02 ... 5.6361e+00 ... 7.2600e+01 ... 6.2004e+02
9 ... 5.4180e+02 ... 5.6361e+00 ... 7.2600e+01 ... 6.2004e+02
Total time: 0.9769660609999999
[[-0.24633193 1. 0. ... 0. 0.
0. ]
[ 0. 0. 1. ... 0. 0.
0. ]
[-0.17271745 0. 0. ... 0. 0.
0. ]
...
[-0.2733521 0. 0. ... -0.94027844 0.
0. ]
[ 0. 0. 0. ... 0. -0.94705056
[Truncated]
[[-0.24639121 1. 0. ... 0. -0.
0. ]
[-0. 0. 1. ... 0. -0.
-0. ]
[-0.17289819 0. 0. ... 0. -0.
-0. ]
...
[-0.27381932 -0. -0. ... -0.94027504 -0.
-0. ]
[-0. 0. -0. ... -0. -0.94705407
0. ]
[-0.17056681 -0. 0. ... 0. -0.
-0.9382435 ]]
```
As you can see the coefficient matrices look the same (this needs further testing) and the timings are 0.97 vs 8.8 (I have noticed between a 50-100x speedup with ~250 features). My current code is not at all ready to merge, because I first wanted to bring up this issue + discuss possible ways to integrate this into your codebase and/or other optimizers.
Kind regards,
<NAME>
Answers:
username_1: Nice job with this (also, thank you for being thorough!) and I think you're right that all the optimizers would get several speedup factors from these changes. I will put it on my to-do list. In principle this would be a series of relatively simple changes to the code but would require a lot of testing to make sure we don't screw up each optimizer.
username_1: @znicolaou @username_0 A word of caution for altering these algorithms. One issue that comes up is that some of the operations (especially in the constrained and trapping SR3 optimizers) are dependent on a SVD (importantly, the pinv function). Often these inverse operations are performed on sparse matrices (the model coefficients or the constraints), meaning the matrices are often very ill-conditioned. For functions like pinv, there is a threshold value "rcond" such that singular values < rcond get truncated to zero. In some situations this can change the results of the algorithm. In some cases this may make the model fit better or worse, but mostly I wanted to warn you that you may change the function **correctly** but still produce slightly different models than before! |
openwrt/luci | 733651865 | Title: Luci errors on master
Question:
username_0: Fresh master branch build, Luci system rpc seems broken

Answers:
username_2: same problem.
username_3: same problem.

username_4: same problem.

username_5: Can confirm this is indeed broken.
username_0: Rofl, missing quotation mark; someone didn't run the tests.
username_6: Fixed by #4560
Status: Issue closed
username_7: This is back, right(?):
RPC call to luci.wireguard/getWgInstances failed with error -32000: Object not found
at ClassConstructor.handleCallReply (http://192.168.1.1/luci-static/resources/rpc.js?v=git-21.226.86205-376af36:11:3)

username_8: @username_7
Confirm issue come back
`RPCError
RPC call to luci.wireguard/getWgInstances failed with error -32000: Object not found
at ClassConstructor.handleCallReply (https://192.168.16.1/luci-static/resources/rpc.js?v=git-21.267.65414-1d9067b:11:3)`
username_7: So this needs to be reopened, right? Let us know what extra info if any needed!
username_9: I think this issue needs to be re-open.
I get the getFeatures error if busybox symlinks are installed in /bin instead of /usr/sbin. The error is caused by missing /usr/bin/env |
flutter/flutter | 170951373 | Title: Can we start our apps with a white screen on Android?
Question:
username_0: The Gallery app starts up with a black screen before it draws the app itself. Opening many other apps on Android, I notice they all start with a white screen.
(I tested Play Store, Play Music, Android Pay, calculator. Note: the clock starts blue. So maybe the app can pick which initial color it fills with?)
Answers:
username_0: Not sure at which level we can control the initial color, so I started with "affects: framework" :)
username_1: We should start with the fully rendered app, not black or white or blue.
username_0: Even better, but in the meantime, we should start with a color that we're going to draw into.
username_0: @username_1 pointed out that this is apparently controlled by a config in the AndroidManifest.xml. So hopefully this is an easy one :)
username_2: https://github.com/flutter/flutter/blob/master/examples/flutter_gallery/android/AndroidManifest.xml#L12
I think you just need to change that "black" to white. It's possible we draw black ourselves after we start the GL context before the app loads though, so there might be a flash if you do that.
username_2: There's a way on Android to hold the entrance animation for your app while you boot up so that the app can come in fully rendered. Some Android apps do this, but it's a gamble of course because if you take a long time to start up, the phone can seem unresponsive.
username_1: Yeah, doing that is first predicated on making startup super fast.
username_0: Is that something that the user (in this case, Gallery) can control? Can we ask the engine to draw white immediately, so there isn't a flash?
username_0: On a Nexus 5, I see this:
* black
* draw the home screen
* flash the launcher background
* draw the home screen
I definitely see the launcher background flash in during the initial load.
username_0: Just tested with the latest build. I can confirm the flash on startup is still there.
username_0: @HansMuller @username_3 this one might be an easy win?
@username_2 says it's just https://github.com/flutter/flutter/blob/master/examples/flutter_gallery/android/AndroidManifest.xml#L12
username_3: Just tried @username_2's suggestion (though the theme name is Light, not white), and it looks better. However, the screen sometimes flashes transparent for a few frames after the white loading screen just before the gallery is painted.
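In manifest terms the change is roughly the following (theme names are the stock Android framework styles; the exact activity declaration in your template may differ):
```xml
<!-- Before: android:theme="@android:style/Theme.Black.NoTitleBar" -->
<activity
    android:name=".MainActivity"
    android:theme="@android:style/Theme.Light.NoTitleBar" />
```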
username_0: thanks for the update! I also notice the image for the app bar comes in after the text is drawn. I believe this contributes to some of the flash effect. Not sure if we can have the image drawn at the same time as the rest of the contents of the home screen?
username_0: Just tested the update from @username_3 . Definitely better. :) But the flash is certainly still there on a 5X.
username_0: Slow mo video for your enjoyment! :) https://youtu.be/mjz0p2Hs5Xw?t=1m17s
Notice how, at that timestamp, we see the following:
* White screen drawn
* Then we see the launcher background
* Then we see black
* Then we see the app
* Then we see the images in the app bar appear
If you watch the whole video, you'll see that we see different extremes of flashing. At 38 seconds, we see what is probably correct. At 1 second, we see the white to transparent to black to app.
username_0: Curious what we could do about this. Gallery is certainly flashier than other Android apps when they start up. Is there something else Gallery could do, or is there something the framework could offer us, or is this deeper integration with Android that we need to expose?
Goal is to start without flashing more than the initial Android draw into the app. We'd like to eliminate the flashing of black, and the flashing of transparency. Ideally, we'd also like to draw the image for the app bar at the same time we draw the entire first frame.
username_2: The white expanding card is drawn by Android while loading our app. The transparent flash is when Android switches over to rendering our app but we haven't managed to draw anything with GL yet. The black is the first thing we draw with GL.
We could almost certainly replace the black with white, which would look better. I'm not sure how to remove the transparent flash. There are lots of articles on stack overflow and such talking about this issue. The roughly involves holding one of Android's threads hostage while we init GL on another thread.
After the app loads, we could delay drawing the app until the asset images load, at the cost of showing the app later.
username_0: Is this something the app developer has control over? aka "Draw this color the very instant you get GL" We'd like to change it in the gallery. cc @HansMuller
Re: the rest of the flashing, we'll probably want a way to eliminate the whole flashing effect, but we won't back the gallery for it. I'll move this over to "Flutter 1.0" and open a new issue to change the initial draw of black for the gallery.
username_4: @username_2 I don't think we're doing any `glClear` ourselves in the engine, right? At first glance, it appears that Skia is doing all the clearing. Would the strategy here for a fix be a flag in `runApp` that enables the user to set up a default color, or should we consider clearing early on a per-platform basis with an appropriate `glClearColor`?
username_2: I don't think we can wait to load and execute Dart code if we want to get rid of the flashing entirely. We'll need to know the color we want to paint synchronously inside FlutterView.java's surfaceCreated function.
username_4: Good point, which is exactly why I mentioned having default behavior on Android/iOS.
username_2: That pull request handles the transparent and black parts of the flash. Let's handle the image pops in another bug.
Status: Issue closed
username_0: Very cool! Thanks, looks great on the Nexus 5.
username_5: @thenexus00 if you think there should something be done by the Flutter team it's better to create a new issue instead of commenting on a closed one. |
osm-fr/infrastructure | 554318175 | Title: Munin problem on bzh202
Question:
username_0: Since the very beginning of January there have been no more graphs at
http://munin.openstreetmap.fr/osm25.openstreetmap.fr/bzh202.osm25/index.html
poke @username_1 or @jocelynj
Answers:
username_1: In osm127:/var/log/munin/munin-update.log (the master, the one that collects the data), there is:
2020/01/23 18:15:21 [INFO] node bzh202.osm25 advertised itself as bzh202 instead.
2020/01/23 18:15:21 [WARNING] Config node bzh202.osm25 listed no services for bzh202.osm25, (advertised as bzh202). Please see http://munin-monitoring.org/wiki/FAQ_no_graphs for further information.
Maybe a change was lost in the recent update of the file listing the hosts.
username_1: Added host_name bzh202.osm25 in bzh202:/etc/munin/munin-node.conf + restarted munin-node.
Effect:
2020/01/23 19:55:34 [INFO]: Munin-update finished for node osm25.openstreetmap.fr;bzh202.osm25 (14.56 sec)
The graphs have reappeared, with a big gap.
I'll let you confirm that it's good and, if so, close the ticket.
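For reference, the fix amounts to one line in the node config (sketch; all other settings unchanged):
```
# bzh202:/etc/munin/munin-node.conf
# Make the advertised node name match the name the master expects.
host_name bzh202.osm25
```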
username_0: Looks OK.
Status: Issue closed
|
GMOD/indexedfasta-js | 843734206 | Title: getSequenceNames() returns an array of string integers instead of the sequence names
Question:
username_0: I'm trying to retrieve the list of sequence names in the FASTA file, but it seems that the getSequenceNames() method just returns a list of ints (as strings) whose length is the number of sequences in the FASTA. If I run getSequenceSizes(), the names of the sequences in the returned object keys are different (and correct), but if I use getSequenceSize(seqName) with one of the seqNames returned from getSequenceNames(), the return is undefined.
```js
seq.getSequenceNames()
[
'0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
'10', '11', '12', '13', '14', '15', '16', '17', '18', '19',
'20', '21', '22', '23', '24', '25', '26', '27', '28', '29',
'30', '31', '32', '33', '34', '35', '36', '37', '38', '39',
'40', '41', '42', '43', '44', '45', '46', '47', '48', '49',
'50', '51', '52', '53', '54', '55', '56', '57', '58', '59',
'60', '61', '62', '63', '64', '65', '66', '67', '68', '69',
'70', '71', '72', '73', '74', '75', '76', '77', '78', '79',
'80', '81', '82', '83', '84', '85', '86', '87', '88', '89',
'90', '91', '92', '93', '94', '95', '96', '97', '98', '99',
... 355 more items
]
seq.getSequenceSizes()
{
'NC_000001.11': 248956422,
'NT_187361.1': 175055,
'NT_187362.1': 32032,
'NT_187363.1': 127682,
'NT_187364.1': 66860,
'NT_187365.1': 40176,
'NT_187366.1': 42210,
'NT_187367.1': 176043,
'NT_187368.1': 40745,
'NT_187369.1': 41717,
'NC_000002.12': 242193529,
'NT_187370.1': 161471,
'NT_187371.1': 153799,
'NC_000003.12': 198295559,
...
}
seq.getSequenceSize('0')
undefined
```
Answers:
username_1: Thanks for catching...I propose #50 to fix (just updating the docs to refer to getSequenceList() instead of getSequenceNames())
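For anyone landing here, a usage sketch per that fix (the constructor options follow the README; the file paths are placeholders):
```js
const { IndexedFasta } = require('@gmod/indexedfasta')

const seq = new IndexedFasta({ path: 'genome.fa', faiPath: 'genome.fa.fai' })

;(async () => {
  const names = await seq.getSequenceList()          // real names, not '0', '1', ...
  console.log(await seq.getSequenceSize(names[0]))   // e.g. 248956422
})()
```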
Status: Issue closed
|
learn-co-curriculum/setting-up-a-new-site | 121602368 | Title: .gitignore files
Question:
username_0: When accessing these files to copy and paste, as directed by the video (3:30 mark), I was not able to find them. It seems that they were moved elsewhere or are no longer available. Please provide the new path, if available. Thank you. Heber.
Answers:
username_1: @username_0 I added a direct link in the README.md file thanks for your suggestion. Here is the link as well: https://gist.githubusercontent.com/octocat/9257657/raw/3f9569e65df83a7b328b39a091f0ce9c6efc6429/.gitignore
Status: Issue closed
username_0: Jonathan,
This is great!
Thank you so much.
<NAME> |
void-linux/void-packages | 409259572 | Title: Package Request: openeuphoria
Question:
username_0: # OpenEuphoria [](https://openeuphoria.org)
Here's a pretty obscure programming language that no other distribution has (to my knowledge). It's a pretty old language that gained a niche following once it became open source over a decade ago. I tried packaging it a few years back and gave up after a week of trying. I thought it would be neat to have it in the repos, and maybe it will get a few users that way.
### Features:
* General purpose
* Generate C code
* Extensive SFML support
* Extensive Gtk support
* Native IDE
* An active friendly community
* Easy to learn syntax
* Procedural
* Performance focused
* 4 data types
### Code Example:
**_FizzBuzz_**
```
include std/console.e
integer a = 3
integer b = 5
integer c = 100
for i = 1 to c do
if remainder(i,a * b) = 0 then display("FizzBuzz")
elsif remainder(i,a) = 0 then display("Fizz")
elsif remainder(i,b) = 0 then display("Buzz")
else display(i)
end if
end for
```
### Stats:
**Current Release Version:** 4.0.5 (2012)
**Current Beta Version:** 4.1.0 (available on sourceforge)
**Last Github Commit:** Oct 22, 2018
### Links:
[Official Website](https://openeuphoria.org/)
[Wikipedia](https://en.wikipedia.org/wiki/Euphoria_(programming_language))
[Github](https://github.com/OpenEuphoria/euphoria)
[SourceForge (seems deprecated)](https://sourceforge.net/projects/rapideuphoria/)
### Notes:
You may have to build 4.1.0 from sourceforge (or pull from master on github) since 4.0.5 is very old.
Answers:
username_1: 4.1.0 is a newer version, but it's beta, so we would most likely package the stable release (4.0.5) |
heiseonline/shariff | 200116356 | Title: dynamic added data-url without effect
Question:
username_0: I'm adding the data-url parameter to the shariff div tag via onReady, before loading shariff.min.js, and I can see the added url in the DOM tree via the page debugger in my browser.
But the generated anchor tags submit the previous page url to the social media websites.
My hope was that the transmitted link would be the data-url address.
Or is that address only used for getting the page count from the optional shariff backend? In that case I'd plead for the data-url parameter also being used as the submitted target url for the social sharing function.
Answers:
username_1: Maybe this could help:
https://github.com/heiseonline/shariff/pull/186 |
daler/pybedtools | 1085769046 | Title: POS-field in VCF files and intervals derived from them
Question:
username_0: So why is the VCF parser treating the POS field like this:
https://github.com/username_1/pybedtools/blob/ffe0d4bd2f32a0a5fc0cea049ee73773c4d57573/pybedtools/cbedtools.pyx#L678-L679
as opposed to this line (for GFF):
https://github.com/username_1/pybedtools/blob/ffe0d4bd2f32a0a5fc0cea049ee73773c4d57573/pybedtools/cbedtools.pyx#L691
AFAIK, VCF positions in the POS field are 1-based and should therefore be treated by the parser in the same way as GFF files:
```
start = int(fields[1]) - 1
end = int(fields[1])
```
Operations (intersections, etc.) work as expected, but the statement in the documentation feels wrong, since for VCF intervals `Interval.start` reports the 1-based coordinate.
Please explain.
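To spell out the conversion I'd expect, here is a tiny sketch of the arithmetic:
```python
pos = 100          # 1-based POS from the VCF record
start = pos - 1    # zero-based start, same treatment as the GFF branch
end = pos          # half-open end
print(start, end)  # -> 99 100
```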
cheers,
dom
Answers:
username_1: Yep, you're totally right. I had no tests for this, and it has gone unreported for a decade now!
In https://github.com/username_1/pybedtools/pull/359/commits/8a8997c7c77e6b5ec0bcc4f8bbe2ae4fb5a56aaf this is now fixed. Thanks for reporting.
Status: Issue closed
|
terascope/file-assets | 348529116 | Title: rename to file_exporter
Question:
username_0: With the way this ended up evolving, let's rename the processor to `file_exporter` and change the way it's configured.
Remove the `d2f` and `jsonIn` options and replace them with a single `format` option. Valid formats should be `csv`, `tsv`, `json` and `txt`. `json` assumes the input is JSON that needs to be serialized and `txt` just takes the input and writes it out one record per line.
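A job entry using the new option might look something like this (only `_op` and `format` come from the spec above; the `path` key is illustrative):
```json
{
  "_op": "file_exporter",
  "path": "/app/data/output",
  "format": "csv"
}
```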
Answers:
username_0: Completed in #4
Status: Issue closed
|
renode/renode | 876890016 | Title: sysbus.cpu LogFunctionNames true -- not working correctly
Question:
username_0: Renode seems to not be getting the symbol information. Or, it doesn't understand mangled C++ names.
It seems every instruction prints out a log line.
```
14:46:04.4494 [INFO] cpu: Entering function tflite:: at 0x4003CC3C
14:46:04.4495 [INFO] cpu: Entering function tflite:: at 0x4003CC40
14:46:04.4495 [INFO] cpu: Entering function tflite:: at 0x4003CC44
14:46:04.4496 [INFO] cpu: Entering function tflite:: at 0x4003CC48
14:46:04.4496 [INFO] cpu: Entering function tflite:: at 0x4003CC4C
etc.
```
Also,
```
sysbus FindSymbolAt `sysbus.cpu PC`
```
just returns "tflite::".
If I run 'nm' on the software.elf file, it does seem to have the TFLITE function names, although C++ mangled.
To reproduce in CFU-Playground,
* clone and run `scripts/setup`
* cd to proj/proj_template_v
* run `make -j8 renode`
* in the monitor, type `sysbus.cpu LogFunctionNames true`
* in the uart, type '1' three times to navigate the menu to run an inference
Even in basic C code, I'm seeing more printouts than necessary:
```
14:52:07.9404 [INFO] cpu: Entering function readchar at 0x4005C3A0
14:52:07.9404 [INFO] cpu: Entering function readchar at 0x4005C3A4
14:52:07.9404 [INFO] cpu: Entering function readchar at 0x4005C388
14:52:07.9404 [INFO] cpu: Entering function uart_read_nonblock (entry) at 0x4005C5FC
14:52:07.9405 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C600
14:52:07.9405 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C604
14:52:07.9405 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C608
14:52:07.9406 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C60C
14:52:07.9406 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C610
14:52:07.9406 [INFO] cpu: Entering function uart_read_nonblock at 0x4005C614
14:52:07.9406 [INFO] cpu: Entering function readchar at 0x4005C38C
14:52:07.9407 [INFO] cpu: Entering function readchar at 0x4005C3A0
14:52:07.9407 [INFO] cpu: Entering function readchar at 0x4005C3A4
```
Answers:
username_1: Do you mean that you see the same symbol reported many times in a row?
This is an expected behaviour as in fact we are logging each executed block (being a sequence of successive instructions).
You should see that PC values at the end of messages change. If not it might suggest executing a loop.
BTW, some entries have the `(entry)` fragment which means that this is the first instruction in the function.
username_1: https://github.com/renode/renode/commit/8dad446ccbbee0f019a307ac214ead9be6d903bd fixes demangling of symbol names.
Could you check if it works fine for you now, @username_0.
As for the repeated "Entering function X" log entries we are planning to add an option to limit those (but this is a separate task).
Status: Issue closed
|
imdone/imdone | 564962679 | Title: Relative image directory is from imdone root not file of card
Question:
username_0: **Describe the bug**
The relative image directory is resolved from the imdone root, not from the file the card is in.
**To Reproduce**
Add a card. Put an image tag in the card with a path relative to the file the card is in.
**Expected behavior**
The image displays in the card.
Status: Issue closed |
openssl/openssl | 824288071 | Title: Rework EVP init API's with OSSL_PARAM arguments
Question:
username_0: Follow on to #11007, once the OTC decided what to do
Status: Issue closed
Answers:
username_0: Nothing to see here. Getting #14383 through review is sufficient since no changes were mandated by the OTC discussion.
Status: Issue closed
|
liubrook/FE-Weekly-Questions | 648820715 | Title: How many processes does Chrome need to start to open a page, and which processes are they?
Question:
username_0: Starting the browser from a closed state and then opening 1 page requires at least 1 network process, 1 browser process, 1 GPU process, and 1 renderer process, i.e. 4 processes in total. When further tabs are opened, the browser, network, and GPU processes are shared and are not restarted. If two pages belong to the same site, and page B was opened from page A, they will also share one renderer process; otherwise a new renderer process is started.
The latest Chrome browser consists of: 1 main browser process, 1 GPU process, 1 network process, multiple renderer processes, and multiple plugin processes.
- Browser process: mainly responsible for the browser UI, user interaction, and managing child processes; it also provides storage and similar services.
- Renderer process: its core task is to turn HTML, CSS, and JavaScript into web pages the user can interact with; the Blink layout engine and the V8 JavaScript engine both run in this process. By default, Chrome creates one renderer process per tab, and for security reasons renderer processes run in sandbox mode.
- GPU process: Chrome actually shipped without a GPU process at first. The GPU was initially used to implement 3D CSS effects, but later web pages and Chrome's own UI were also drawn with the GPU, which made the GPU a universal requirement for the browser. In the end, Chrome added a GPU process to its multi-process architecture.
- Network process: mainly responsible for loading a page's network resources; it used to run as a module inside the browser process and only recently became a separate process.
- Plugin process: mainly responsible for running plugins. Since plugins crash easily, they are isolated in plugin processes so that a plugin crash does not affect the browser or the page.
Caliburn-Micro/Caliburn.Micro | 530213618 | Title: Fail to open the source code (the Caliburn.Micro.sln file)
Question:
username_0: When I open CM's solution file Caliburn.Micro.sln, I get the error message `one or more projects were not loaded correctly`, and the Output window shows:
```
Caliburn.Micro\src\Caliburn.Micro.Core\Caliburn.Micro.Core.csproj : error : The expression "[System.IO.Path]::GetDirectoryName('')" cannot be evaluated. The path is not of a legal form. C:\Users\XXX\.nuget\packages\msbuild.sdk.extras\2.0.54\Sdk\Sdk.props
Caliburn.Micro\src\Caliburn.Micro.Platform\Caliburn.Micro.Platform.csproj : error : The expression "[System.IO.Path]::GetDirectoryName('')" cannot be evaluated. The path is not of a legal form. C:\Users\XXX\.nuget\packages\msbuild.sdk.extras\2.0.54\Sdk\Sdk.props
```
Note that I am able to open the solution file of older CM source code without error. I am using VS2017 with the latest update.

Answers:
username_1: This looks very similar to https://github.com/novotnyllc/MSBuildSdkExtras/issues/190
Status: Issue closed
|
tonypls/tmbk | 227336938 | Title: Failed Unit Tests in Master
Question:
username_0: Master was updated with unit tests that fail. This was then merged into the tile graphics branch.
Status: Issue closed
Answers:
username_0: We had added an input parameter to the game constructor. This meant that where the constructor was called with fewer input parameters we got a null pointer exception. Fixed by keeping the old constructor, giving the new variable a default value in it, and creating a new constructor with the extra input parameter as needed.
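A rough sketch of the shape of that fix (class and names are illustrative, not the actual project code):
```java
public class Game {
    private static final int DEFAULT_SIZE = 4;
    private final int size;

    public Game() {
        this(DEFAULT_SIZE); // old call sites keep compiling
    }

    public Game(int size) {
        this.size = size;
    }
}
```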
19-1-skku-oss/2019-1-OSS-E5 | 453307685 | Title: GitHub Page management
Question:
username_0: When writing up what you've done on the GitHub Page, if there is anything you think should definitely be included, please feel free to bring it up.
Answers:
username_1: Got it!
By the way, team leader, are you working on the page in another branch right now?
username_2: I'm going through the math files fixing compile errors, Python syntax errors, and spelling mistakes,
and I'm adding missing algorithms plus detailed comments and explanations so that students can understand them more easily!
username_3: Hello! Everyone is working hard today as well!
Since this morning I've been sketching out the README.md, the Wiki links, and the upload workflow in my spare moments. I'll post the proposed upload workflow in the next issue, so please take a look.
rafayet-monon/surveyor | 678142323 | Title: Deployment to Heroku production failed
Question:
username_0: While running continuous deployment on master, the Heroku production deploy failed.
<img width="928" alt="Screenshot 2020-08-13 at 10 20 03 AM" src="https://user-images.githubusercontent.com/14927672/90094718-7e0b1880-dd50-11ea-94a9-c410f85a0957.png"> |
sveltejs/svelte | 192582617 | Title: Some ideas for API enhancement
Question:
username_0: - Use something like the [Factory_method_pattern](https://en.wikipedia.org/wiki/Factory_method_pattern) to enable people to write JavaScript without the need for the `new` keyword.
- Something similar to [tmpvar / weld](https://github.com/tmpvar/weld) to enable people to write HTML without the need for special template syntax
- API similar to [marko-js / marko](https://github.com/marko-js/marko)
In general I really like the idea of this project and would like to thank @nolanlawson for [tweeting it](https://twitter.com/nolanlawson/status/803711384252870657).
Answers:
username_1: +1 for factory, would be nice to make `new` optional and also easy to achieve
username_2: It's easy enough to make `new` optional but is there a good reason to do so? By adding the `instanceof` check at the start of every component, you've added extra bytes to everyone's application for the sake of allowing developers to write code that's actually *less* idiomatic. (With ES2015 classes, you *have* to use `new`, so over time `new`-less instantiation is going to seem ever more weird and out of place). Not to mention that it's a redundant check for nested components.
If it's purely to prevent errors, then maybe a [development warning](https://github.com/sveltejs/svelte/issues/13) would be a better approach.
@username_0 do you have specific API suggestions? 'Make it like these other projects' isn't something we can work with, unless it's clear what you mean and why it would be an improvement 😀
Status: Issue closed
|
mcollina/docker-loghose | 160627542 | Title: Test fails on node.js v6.x
Question:
username_0: It seems that something in the Buffer API changed; the parser tests fail with ```npm test```, e.g.:
```
9) The parser outputs the chunk wrapped in an object:
Error: If encoding is specified then the first argument must be a string
at new Buffer (buffer.js:106:13)
at Object.module.exports.buildBuffer (test/helper.js:28:16)
at Context.<anonymous> (test/parser/parser.js:25:44)
at Context.<anonymous> (test/parser/parser.js:14:5)
```
@username_1, could you please look into it? I think only the parser test is wrong; I had no issues with Node version 6.x and docker-loghose.
Once this works, we could update .travis.yml for Node 6.
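For reference, the usual fix pattern for this error looks something like the following (a sketch; the `Buffer.from` guard keeps Node 0.10 working, since it predates `Buffer.from`):
```js
var chunk = JSON.stringify({ line: 'hello' });
// On Node >= 4.5 build the buffer from a string via Buffer.from;
// fall back to the old constructor on Node 0.10.
var buf = Buffer.from ? Buffer.from(chunk, 'utf8') : new Buffer(chunk, 'utf8');
console.log(buf.length);
```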
Answers:
username_1: Sure, I should be able to have a look at it this weeked, thanks!
username_0: Made it easy for you; tests pass on Node v6 with this change. Please review:
https://github.com/mcollina/docker-loghose/pull/13/commits/8be900d45c50313fb32d90fd289769b7f4475d35
username_0: Better look at https://github.com/mcollina/docker-loghose/pull/13/commits/6fe138465576d800b480c55c75fc74124125a279 :) the previous one failed on node 0.10.
username_0: Solved with https://github.com/mcollina/docker-loghose/pull/13
Status: Issue closed
|
bcgov/name-examination | 341177967 | Title: Stabilization - Conflicts details - NRs and Corps
Question:
username_0: - NR data needs to be loaded into postgres
- Corp data - does it work?
- upon load, highlight and show details of the first one
-- be able to un-select (click again) so it is not chosen on the decision screen
Status: Issue closed |
Damianonymous/streamlink-plugins | 794172415 | Title: chaturbate.py plugin
Question:
username_0: How can I change this plugin to work on the cosplayercam.com site? It's the same site as Chaturbate, just the URL is different, I guess.
Answers:
username_1: Use the generic plugin that supports multiple pages, I added it to the archive, plugin is still updated on author's site.
username_2: @username_1 send me your email. I have plugins for cam4 and chaturbate for streamlink 2.0.
username_1: Showup works in version 2.0.
Status: Issue closed
username_1: The Showup plugin works in version 2.0. For Chaturbate and cam4 the generic.py plugin is sufficient.
Status: Issue closed
|
badges/shields | 396106951 | Title: Lock down services which accept URLs to require https where possible
Question:
username_0: Let's continue to nudge our ecosystem toward using https. For the most part, when we accept and refactor new services, we steer things that way, by using https endpoints and assuming https for user-provided URLs.
Let's push that forward wherever we can, perhaps by requiring https wherever we're accepting a user-provided URL.
In cases where we think there are sources which can't be migrated, we could add a flag `?allow_insecure`. This would provide an extra hurdle, and would nudge them to use https if possible.
Answers:
username_1: I like the sound of that! |
ankane/searchkick | 221294700 | Title: Kaminari max_pages to limit result window too large
Question:
username_0: Hi @username_1 You mentioned in #642 that an option to limit results is 'set max_pages in Kaminari'. Are you sure it works?
I've been stuck for 2 months on the large result window issue; I've tried many things, and my conclusion is that the only way to do it is with max_pages in Kaminari or total_entries in will_paginate.
Here is a summary of the options I've tried:
1) Increase ES max_result_window:
Not an option for millions of records (it kills the ES server).
2) Convert to an ActiveRecord relation:
Performance prohibitive.
Post.search("*", limit: 10000)
posts_ids = @searchkick_items.map { |p| p.id } # This is a HUGE array
@posts = Post.where(id: posts_ids).paginate(page: params[:page], per_page: 100)
3) Limit before passing to Searchkick:
The results make no sense, because we are searching over a subset of 10000.
We can do a SQL query first to ensure the result set makes sense and then query ES, but that is a very convoluted solution.
Post.limit(10000).search("*")
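A fourth option I'm considering is capping the page number in the controller so the default max_result_window of 10000 is never exceeded (a sketch; `clamp` needs Ruby 2.4+):
```ruby
per_page = 100
max_page = 10_000 / per_page
page = params[:page].to_i.clamp(1, max_page)
@posts = Post.search(params[:q], page: page, per_page: per_page)
```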
Answers:
username_1: Hey @username_0, don't think I understand what you're asking for help on.
username_0: Hi @username_1 Summarizing: you said in #642 that 'set max_pages in Kaminari' is an option for limiting the result window. But it doesn't work for me. Did you test that it works?
The rest of the post is a summary for future readers who will face the issue. I pointed out the 3 options I've tried for limiting the result window for pagination.
username_2: @username_1, he has a valid use-case in reference to [trying to limit the total number of pages using Kaminari's `max_pages` configuration](https://github.com/username_1/searchkick/issues/642#issuecomment-194982011).
Seems to only work when using Kaminari without Searchkick, which would force the use of updating the Elasticsearch setting of `max_result_window`. Not ideal, but still a viable solution.
Let me know if I can help out or provide some more details 👊
username_0: @username_2 you are right. Kaminari and will_paginate work well without Searchkick.
Updating the Elasticsearch max_result_window setting is not an option when we have millions of results or the requirement is to narrow the result set. In these cases, the only option is to use the Scroll API, which is also not supported by Searchkick.
Status: Issue closed
username_1: Cleaning up stale issues. Unfortunately, I don't have the bandwidth to explain how to use max_pages, but maybe Stack Overflow could help.
username_0: Hi @username_1 Why did you close this issue since it's not resolved?
I'm afraid you don't understand. max_pages does not work with Searchkick; it's not something to search Stack Overflow for.
The issue is to add support for limiting pagination results with will_paginate and Kaminari.
username_1: Hey @username_0, it's closed because I don't plan to spend any more time on it. Feel free to submit a PR if you have a working solution to this feature request.
username_0: I understand, thanks. |
scala/bug | 220078226 | Title: highlight non-exceptions & non-annotations to avoid noise when folks are looking at APIs
Question:
username_0: one nice thing about javadoc when looking at a package is it manages to group exceptions and annotations together away from the main API.
Right now scaladoc lumps all classes together, so if you have an API with a core set of traits/classes it's very easy for them to be hidden by the noise of lots of annotations or exceptions.
I'd like a way to re-introduce the javadoc groupings, showing annotations and exceptions after the other main APIs.
e.g. the list of types in a package would look more like this...

    o c Foo
    - c Bar

    Annotations
      c Green
      c Blue

    Exceptions
      c BlahException
      c WhatnotException
Then the real meat of an API (the core classes) is not mixed up with exceptions & annotations (of which there are often quite a lot, many of which are simple & not that interesting when browsing).
Also if you know you are choosing which annotation you should use, seeing them all grouped together in a package helps.
I'd argue that developers tend to browse annotations more regularly than exceptions. If you get an exception you are more likely to just start typing the name in the search box to find it quickly.
Extra credit if we can easily hide/show exceptions and/or annotations.
I wonder once we start grouping classes by regular, annotation, exception - would folks ever want the 'flat' view we have now?
Status: Issue closed
Answers:
username_1: this is a nice suggestion. but (post JIRA -> GitHub migration), we're keeping scala/bug focused on bugs |
Roy24/spotifyappEx | 297121221 | Title: Credentials.js
Question:
username_0: This file is missing because you have included it in your .gitignore.
I don't know if it contained just API values or if you had any additional methods in there.
But since you built your project's connection with Spotify around this file, it will not start up unless I recreate it. In the future, consider using environment variables locally and then apply the data in your project by passing those values through directly.
That way your project will run and won't be upset because it can't find the credentials file; it will just tell me that the values being passed through are undefined.
https://www.npmjs.com/package/react-app-env |
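For example, a minimal sketch (the variable names are made up; with Create React App they must be prefixed with `REACT_APP_`):
```js
const credentials = {
  clientId: process.env.REACT_APP_SPOTIFY_CLIENT_ID,
  redirectUri: process.env.REACT_APP_SPOTIFY_REDIRECT_URI,
};

export default credentials;
```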
VadimDez/ngx-filter-pipe | 586834009 | Title: How to return Nested null objects
Question:
username_0: ##### Bug Report or Feature Request (mark with an `x`)
```
- [ ] Regression (a behavior that used to work and stopped working in a new release)
- [ ] Bug report -> please search issues before submitting
- [x] Feature request
- [ ] Documentation issue or request
```
Hello,
This is related to #67,
```
import { Component } from '@angular/core';
import { FilterPipe } from 'ngx-filter-pipe';
@Component({
selector: 'my-app',
templateUrl: './app.component.html',
styleUrls: [ './app.component.css' ]
})
export class AppComponent {
issues: any[] = [{ title: 'issue1', user: {name: 'user1', email: '<EMAIL>' }}, { title: 'issue2' ,user:null}];
userFilter: any = {user:{email: '<EMAIL>' }};
constructor(private filterPipe: FilterPipe) {
console.log(filterPipe.transform(this.issues, this.userFilter));
}
}
```
Is there a way to modify userFilter so it returns elements with user = null?
I tried userFilter: any = {user:null}; without success.
Thank you |
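In the meantime, a plain-TypeScript fallback inside the component works for the null case (this is just Array.prototype.filter, not a pipe feature):
```ts
const withoutUser = this.issues.filter(issue => issue.user === null);
console.log(withoutUser); // [{ title: 'issue2', user: null }]
```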
hankcs/HanLP | 561969683 | Title: pip install hanlp fails
Question:
username_0: **Describe the bug**
ERROR: Could not find a version that satisfies the requirement tensorboard<2.2.0,>=2.1.0 (from tensorflow==2.1.0->hanlp) (from versions: 1.6.0rc0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.15.0, 2.0.0, 2.0.1, 2.0.2)
**ERROR: No matching distribution found for tensorboard<2.2.0,>=2.1.0 (from tensorflow==2.1.0->hanlp)**
**Describe the current behavior**
_**Installing hanlp with pip on Python 3.6 fails.**_
**System information**
- OS Platform and Distribution: miniconda, Windows 10
- Python version: 3.6
- HanLP version:
* [ ] I've completed this form and searched the web for solutions.
Status: Issue closed
Answers:
username_1: [Automated reply] Hello, and thanks for the feedback.
Since the issue was not filled in as required, it has been closed automatically. Please revise it carefully according to the [issue template](https://github.com/username_1/HanLP/blob/master/.github/ISSUE_TEMPLATE.md) and then wait patiently for it to be handled next weekend.
Questions asking for advice should be posted on the [forum](https://bbs.username_1.com/) rather than on GitHub. Maintaining an open-source project is not easy; [well-posed questions](https://github.com/username_1/HanLP/wiki/%E5%A6%82%E4%BD%95%E6%8F%90%E9%97%AE) can only be handled on weekends. Apologies for any inconvenience.
 |
CircleCI-Public/node-orb | 693489403 | Title: Feature: Accept multiple cache paths
Question:
username_0: ## Describe Request:
👋 I was updating a project to run integration tests with Cypress. Cypress requires browsers to be installed in order to run the tests, and I understand that none of versions v2, v3, or v4 of this orb will help me until support for browser image variants is provided for the "newer" docker images of nodejs.

So I stuck with v1 and all was good so far. Eventually, I ran into a caching issue because Cypress v3 and up installs itself in the `.cache` folder, rather than in the `node_modules` folder like any other node package (see doc link). This orb, and I think every version of it, doesn't allow me to specify **multiple cache paths**. It defaults to `.npm` and does allow overriding it, but not specifying more than one.
**Is passing a list value possible?** and, would you consider that change?
The alternatives I have today are:
- Using this orb on v1 and add an extra step to globally install the cypress binary, instead of delegating that to the `npm ci` command.
- Haven't tested, but I assume I could still use this orb on v1 and manually call the `save_cache` step passing the `.cache` folder key. Would look odd since I'm running those inside a "with-cache" named step that is supposed to take care of that for me.
- Not using orbs, write the whole config script as usual. The downside is the verbosity.
## Examples:
## Supporting Documentation Links:
- Cypress NPM caching docs: https://docs.cypress.io/guides/guides/continuous-integration.html#Caching
- Orb v1 / with-cache: https://circleci.com/orbs/registry/orb/circleci/node?version=1.1.6#commands-with-cache
Answers:
username_1: Is this also needed for yarn workspace projects where there can be multiple `node_modules` folders?
username_2: @gmemstr I see that you added the hacktoberfest label. Can you add the topic label as well?
username_3: +1
```
steps:
- save_cache:
key: node-deps-<<parameters.cache-version>>-<<#parameters.include-branch-in-cache-key>>{{ .Branch }}-<</parameters.include-branch-in-cache-key>>{{ checksum "/tmp/node-project-lockfile" }}
paths:
- <<parameters.cache-path>>
```
There might be different _paths_ you're interested in caching in your setup (like pipelines triggered through Circle hooks that pre-build static assets); workspace projects and other use cases would benefit from exposing the same parameter type that's used for `paths` - a list.
username_4: I may be missing something, but it seems like this would be straightforward to support since `save_cache` accepts an array of paths.
username_5: Unfortunately, this is not the case as we do not have the ability to create array parameter types. I have created an issue for this here: https://ideas.circleci.com/cloud-feature-requests/p/support-array-of-type-parameters-1
username_4: That makes sense. How feasible would it be to split a delimited string as a workaround in the meantime?
username_5: @username_4 I don't believe it's possible, since we would have to be able to split the string through a custom script. I think for now if someone requires multiple cache paths they should just be implementing the cache themselves. I would like to see this functionality available some day and will advocate for the support array type.
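For anyone needing this today, a rough sketch of doing the caching by hand (step names and the cache key are made up) could look like:
```yaml
steps:
  - restore_cache:
      keys:
        - deps-v1-{{ checksum "package-lock.json" }}
  - run: npm ci
  - save_cache:
      key: deps-v1-{{ checksum "package-lock.json" }}
      paths:
        - ~/.npm
        - ~/.cache
```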
Status: Issue closed
username_5: Going to close this for now as we do not have the features necessary. If this becomes possible in the future we will implement it! |
philipheimboeck/gps-hawk | 116590763 | Title: Recreate the view
Question:
username_0: When rotating the screen or switching apps, the state of the view will be lost. We should save it and recreate it.
This might help
http://developer.android.com/training/basics/activity-lifecycle/recreating.html
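A minimal sketch of the pattern from that page (the key name and the field are placeholders for whatever view state we actually need to keep):
```java
import android.app.Activity;
import android.os.Bundle;

public class TrackActivity extends Activity {
    private boolean trackingActive; // placeholder for real view state

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // restore the state saved before rotation / app switch
            trackingActive = savedInstanceState.getBoolean("tracking_active");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putBoolean("tracking_active", trackingActive);
    }
}
```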
Answers:
username_0: should be fixed by 1e70fbf |
RocketScienceProjects/BlueKing | 169234495 | Title: Add unit test
Question:
username_0: @username_1 can you try to add a unit test to the code?
Answers:
username_1: @username_0 Yeah I'll do it. Can i do it this weekend?.
username_0: @username_1 Thanks! Trying to use Surefire from Maven, once that is in place.
username_1: @username_0 Done!. Check it out!
Status: Issue closed
username_0: Awesome, let me take a look tomorrow and I will update you. @username_1 .. Then we should reconnect
username_0: Done with the test execution via Jenkins. Had to update the pom.xml for the test case directory. @username_1
barrybecker4/ScalphaGoZero | 562183676 | Title: upgrade to latest libraries
Question:
username_0: Things to upgrade:
java 8 -> 11
scala 2.12 -> 2.13.x
nd4j 1.0.0-beta3 -> 1.0.0-beta6
etc
Answers:
username_0: Note: ND4J is not yet compatible with scala 2.13.1 as of 1.0.0-beta6
username_0: I updated scalatest and use 1.0.0-beta3 of nd4j.
username_0: Travis CI build now uses openjdk11.
harbourlab/SparK | 799685380 | Title: advice for plotting methylation data
Question:
username_0: Hello again,
I am trying to visualize methylation data using SparK. I have bedGraph files with values ranging from 0 to 1. However, when I plot the region it shows almost no signal even though there are values ranging from 0 to 1. I have tried with and without smoothing. Do you have any advice on getting this information to translate well onto the plot? My plot and the code I used are below. Thank you!
<img width="304" alt="sparK_plot" src="https://user-images.githubusercontent.com/42899544/106662054-89086e00-6567-11eb-90d6-2e7e0ee44729.png">
```
python SparK.py \
-pr chr4:103410000-103450000 \
-cf ${INPUT_DIR}/WGBS_EU_Flu_avg.bw.bedGraph \
-tf ${INPUT_DIR}/WGBS_EU_NI_avg.bw.bedGraph \
-tg 1 \
-cg 1 \
-gl WGBS \
-l Flu NI \
-gtf /ref/Homo_sapiens.GRCh37.Ensembl75.gtf \
-dg NFKB1 \
-cs 1.0 \
-o spark.NFKB1.WGBS
```
Answers:
username_1: Hi! Have you tried plotting it without the "-cs" option? If so, what does the output look like?
username_0: Yes! Here is the output. I have looked at the bedGraph file in that region and I know that there are values greater than 0.2 in this region.
<img width="363" alt="spark_no_cs_flag" src="https://user-images.githubusercontent.com/42899544/106675292-83685380-657a-11eb-9cb5-a3fab70ea476.png">
I wonder if the problem is the format of my bedgraph.
chr4 100000275 100000276 0.977952380952381
chr4 100000482 100000483 0.971
chr4 100000656 100000657 0.969238095238095
chr4 100000807 100000808 0.960619047619048
chr4 100000997 100000998 0.963095238095238
chr4 100001015 100001016 0.954047619047619
Perhaps the gaps between CpG sites are eliminating the signal?
Thanks!
username_1: Would you mind sharing the bedgraph file of that region? Just that region is enough, doesn’t have to be the entire one.
username_0: Here they are:
[chr4_sub.WGBS_EU_NI.bedGraph.txt](https://github.com/username_1/SparK/files/5920090/chr4_sub.WGBS_EU_NI.bedGraph.txt)
[chr4_sub.WGBS_EU_Flu.bedGraph.txt](https://github.com/username_1/SparK/files/5920091/chr4_sub.WGBS_EU_Flu.bedGraph.txt)
Thank you for your help!
username_1: Hi! No problem. This is actually a very interesting case. This is a problem of the data type. Methylation data is basically a number of sharp peaks, every one only one base pair long. If you plot a large region like 100 kb, then you have way more data points than pixels in your plot. There are different ways of dealing with that, and SparK will calculate the average methylation per pixel of the plot. Say you have to squeeze 10 data points into one pixel of the plot, and you only have one methylation site with a value of 1.0, then you will end up with an average value of 0.1. This is what you are experiencing here. Your data is there, just with lower values. However, I would actually consider this the better way of plotting things! See the attached example. I have plotted a larger region with IGV, you can see that the data ranges to 1.0, which is what you were expecting. However, the data is so highly clustered that it really doesn't tell you anything at all. See for instance the red square (see 1). If we zoom in once, we suddenly see that what looked as one massive block of methylation is actually two distinct sites (see 2), that appear to have the same amount of methylated sites. If we zoom in again, we notice that those two sites are not the same at all, but in fact very different in amount of methylation that is present (see 3). So even zooming in once doesn't give us meaningful data. Now look at the SparK plot as a comparison. While the values don't go up to 1, SparK plots meaningful data, even with the most zoomed out version. You can make out the peaks, and immediately see that this region contains a massive peak, aka accumulation of individual methylation sites, which you can not see with the IGV plot.
<img width="612" alt="Screen Shot 2021-02-03 at 9 56 41 PM" src="https://user-images.githubusercontent.com/29413858/106839708-c0ad0e00-666c-11eb-8040-0e7bc4a2273e.png">
username_0: Hi, thanks for looking into what is happening - very interesting! Yes, you may use the plot on the page. Thanks for the help.
username_1: Thanks! Let me know if you need more help! :)
Status: Issue closed
username_2: This function of SparK is meant for representing that type of data, especially for regions with dense or clustered points. However, this function cannot show the real distribution for our data. We have another type of methylation; it occurs sparsely in some regions, so it is different from WGBS. See the example below:

The raw bedgraph data related to the eight points is listed as follows:
chr4 55527803 55527804 100
chr4 55527872 55527873 97.6374
chr4 55528238 55528239 88.1444
chr4 55530128 55530129 85.4848
chr4 55530492 55530493 93.8322
chr4 55530552 55530553 77.9618
chr4 55530798 55530799 88.2967
chr4 55532045 55532046 90.9048
You can see that the 1st and 2nd points have significantly different heights, but the 2nd and 7th points have similar heights. Also think about the 8th point. So, I wonder if there is a way to just show the raw data?
My command is `python $(which SparK.py) -pr chr4:55527304-55532546 -cf test.bedgraph -ps all -gtf gencode.vM25.annotation.gtf -o test`
onsi/gomega | 616708572 | Title: Option to abort `Eventually/Consistently` also for functions
Question:
username_0: [Documentation](https://username_3.github.io/gomega/#making-asynchronous-assertions) mentions that:
```
Note: Eventually and Consistently only exercise the MatchMayChangeInTheFuture method if they are passed a bare value. If they are passed functions to be polled it is not possible to guarantee that the return value of the function will not change between polling intervals. In this case, MatchMayChangeInTheFuture is not called and the polling continues until either a match is found or the timeout elapses.
```
which is reflected in the code: https://github.com/username_3/gomega/blob/1a3d249459a44387a05ca2d2c2b3d5f3db596dcb/internal/asyncassertion/async_assertion.go#L98-L100
There are use-cases where aborting a function can still be useful - imagine spinning up an EKS cluster on Amazon, with `Eventually` checking the output of a function. Once the cluster transitions into the "CREATE_FAILED" state it will not heal itself, and Gomega forces users to wait out the timeout as there is no way to abort the test.
Can it be changed so that if `MatchMayChangeInTheFuture` is defined for the matcher, it is executed regardless of whether the `actual` is a function or a value, so that people can decide on their own?
Thanks in advance.
Answers:
username_1: I think I understand what you're trying to achieve. I agree that there's no point waiting for a timeout when the result cannot change.
I think the rationale of the current implementation is that an Eventually() or Consistently() will poll until the timeout, unless it can be **absolutely certain** that the result will not change. If there is any doubt then it will continue polling. If we change that, it could allow for subtle bugs. For instance the `Exit` matcher knows that the result cannot change once the process has finished. But a function in the `Eventually()` does not guarantee to return the same process each time, so the assumption could be wrong. Similarly the `Receive()` matcher knows that the result cannot change once a channel is closed. But a function may return a different channel each time, so the assumption is not safe.
Could something like this work?
```go
deploymentFinished := func() bool { ... }
deploymentSuccessful := func() bool { ... }
Eventually(deploymentFinished).Should(BeTrue())
Expect(deploymentSuccessful()).To(BeTrue())
```
That way the logic about whether the result can change is handled by the user-defined function.
username_0: I understand where you are coming from and the workaround you suggested would work in the use case I have. Thank you very much for the detailed explanation!
Just as a note, I am of the opinion that users should be given freedom, even when there is a chance of fatal mistakes. E.g. lots of Linux tools offer a `--yes` option for scripts even though it also allows users to make fat-finger mistakes and wipe their hard drives. But again, I just want to share a different point of view :)
Would it be possible to put the justification you have given and the workaround code into the documentation, somewhere near the entry for [MatchMayChangeInTheFuture](https://username_3.github.io/gomega/#making-asynchronous-assertions)? I find it super helpful and for users like me who just started with Gomega, it could save a lot of time.
Thanks in advance!
username_1: Thank you @username_0. I'll improve the docs.
username_2: For whatever it's worth, I just ran into this issue as well, and the example above (EKS cluster) is a great analog for the case I was testing.
I'll use the workaround for now but it would be nice if `Eventually` (or an equivalent function?) had the ability to break out early if the function being called instructed it to do so.
username_3: I'm not at my computer so I'll need to check to be sure... but I think if you panic in the function it will abort.
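A rough sketch of that idea (clusterState is a hypothetical helper; whether this is good practice is debated just below):
```go
Eventually(func() string {
	state := clusterState() // hypothetical helper polling the cloud API
	if state == "CREATE_FAILED" {
		// the result can never change again, so abort the poll early
		panic("cluster creation failed; no point waiting for the timeout")
	}
	return state
}).Should(Equal("ACTIVE"))
```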
username_2: @username_3 that seems a big hammer to use just to fail a test? I'm definitely not an expert on best practices though, so open to being corrected on that understanding.
username_4: ```
```shell
Running Suite: Example
======================
Random Seed: 1611101262
Will run 1 of 1 specs
trying again...
trying again...
trying again...
trying again...
trying again...
cleaning up in a deferred function
cleaning up in AfterEach
• Failure [5.002 seconds]
When running
/home/username_4/projects/ginkgo-experiments/eventually/example_test.go:12
should fail fast from Eventually [It]
/home/username_4/projects/ginkgo-experiments/eventually/example_test.go:14
Timed out after 5.001s.
Expected
<bool>: false
to be true
/home/username_4/projects/ginkgo-experiments/eventually/example_test.go:24
------------------------------
Summarizing 1 Failure:
[Fail] When running [It] should fail fast from Eventually
/home/username_4/projects/ginkgo-experiments/eventually/example_test.go:24
Ran 1 of 1 Specs in 5.003 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestExample (5.00s)
FAIL
Ginkgo ran 1 suite in 5.594551988s
Test Suite Failed
```
username_5: Instead of setting an env variable and reading it, can we do the check based on the returned error?
With `nil` representing "success - continue", can we not have one error for "failure - continue/abort" and another for "failure - retry"?
I am just starting here, please correct me if I am wrong.
xmlunit/xmlunit | 271936123 | Title: NoSuchElementException from CompareMatcher#describeTo(Description)
Question:
username_0: One of our integration tests fails with a `NoSuchElementException` escaping from `CompareMatcher#describeTo(Description)`. We use org.xmlunit:xmlunit-matchers:2.5.0.
In the debugger, I can see that the matcher's `diffResult` is `!= null`, but `diffResult.hasDifferences() == false`. Since `describeTo(Description)` only checks for `diffResult == null`, the code flow enters `ComparisonMatcher#firstComparison()` where the subexpression `diffResult.getDifferences().iterator().next()` throws the `NoSuchElementException`.
The fix seems easy: Simply test for `diffResult == null || !diffResult.hasDifferences()` at the beginning of `CompareMatcher#describeTo(Description)`.
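A sketch of that guard (the message text is a placeholder, not XMLUnit's actual wording):
```java
@Override
public void describeTo(Description description) {
    // nothing to report when there was no diff at all
    if (diffResult == null || !diffResult.hasDifferences()) {
        description.appendText("similar to the control document");
        return;
    }
    // ... existing description logic based on firstComparison() ...
}
```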
Answers:
username_1: Thanks a lot.
Given this hasn't come up before it seems `describeTo` usually isn't invoked if `matches` returns `true`, I wonder what you are doing to trigger this. :-)
Nevertheless this certainly is a bug. It's fixed on a branch of which I intend to cut 2.5.1 during the weekend.
username_0: Thanks.
Since you wondered: I encountered the bug in a rest-assured body check that calls a hierarchy of delegating matchers:
```
// ...
.body(isDocumentFrom(getRequestTarget())
.thatMatches(
allOf(
metadataIsEqualTo(expectedMetaElement),
baselineContent(isSimilarTo(baselineControl)),
nativeContent(isSimilarTo(nativeControl)))));
```
username_1: Thank you. This is a slightly different incarnation of #81 - sorry for not catching the problem back then.
I've published new 2.5.1-SNAPSHOT artifacts if you need the fix before the weekend.
Status: Issue closed
username_1: carved out some time today, 2.5.1 has just been released, I'll officially announce it later.
username_0: Great, thanks a lot! |
material-components/material-components-ios | 368777540 | Title: [ActionSheet] Apply component themer to all examples
Question:
username_0: Currently all of the examples for MDCActionSheetController use the MDCActionSheetColorThemer and MDCActionSheetTypographyThemer. In #5345 a component themer was added, this should be applied to all examples.
- [ ] Objective C example _ActionSheetTypicalUse.m_
- [ ] Swift example _ActionSheetSwiftExample.swift_<issue_closed>
Status: Issue closed |
happyfoxinc/helpstack-android | 67171419 | Title: App crashes when instantiating HelpStack
Question:
username_0: We tried integrating HelpStack by following steps given on GitHub, but we kept getting the following errors:
04-03 13:54:22.054 4638-4638/com.playerline.android E/AndroidRuntime﹕ FATAL EXCEPTION: main
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.playerline.android/com.tenmiles.helpstack.activities.HomeActivity}: java.lang.NullPointerException
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1967)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1992)
at android.app.ActivityThread.access$600(ActivityThread.java:127)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1158)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4448)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
at dalvik.system.NativeStart.main(Native Method)
Caused by: java.lang.NullPointerException
at com.tenmiles.helpstack.activities.HSActivityParent.onCreate(HSActivityParent.java:48)
at com.tenmiles.helpstack.activities.HomeActivity.onCreate(HomeActivity.java:46)
at android.app.Activity.performCreate(Activity.java:4465)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1931)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1992)
at android.app.ActivityThread.access$600(ActivityThread.java:127)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1158)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4448)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
at dalvik.system.NativeStart.main(Native Method)
I'm guessing it has something to do with the App Theme, which in our case is Theme.AppCompat.NoActionBar. Should I be looking at some other area?
Answers:
username_1: Hey. Yeah. I verified and this does happen with NoActionBar. I guess this is because NoActionBar is available only on API v20+ but HelpStack has been built & tested all along only for version 19.
We make quite a few calls on methods chained to getSupportActionBar() without checking it for null, which is why this bug appears.
Possible solutions:
1. HelpStack should check all calls of methods chained to getSupportActionBar() and allow them to run only if getSupportActionBar() != null.
2. You can use a different App Theme until this problem is fixed.
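A sketch of solution 1 (the specific chained call is just an example):
```java
android.support.v7.app.ActionBar actionBar = getSupportActionBar();
if (actionBar != null) {
    // only touch the action bar when the theme actually provides one
    actionBar.setDisplayHomeAsUpEnabled(true);
}
```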
Status: Issue closed
username_3: Hi, I am also facing this issue. I am using version 1.1.2 of HelpStack and have tried a different theme for HelpStack and declared the respective activities in the manifest. [As here](https://github.com/happyfoxinc/helpstack-android/issues/35)
thephpleague/oauth2-client | 176144605 | Title: Move documentation site into master branch?
Question:
username_0: A [recent update by GitHub](https://github.com/blog/2233-publish-your-project-documentation-with-github-pages) allows you to configure project documentation publication to GitHub Pages from a sub-directory instead of the `gh-pages` branch.
Is this something we are interested in doing with this project? It may reduce the need for two pull requests when adding providers to documentation.
Answers:
username_1: I'm on board with that. Then we add that dir to the `.gitattributes` file for `export-ignore`. Thoughts, @username_2?
username_2: I concur with @username_1.
Status: Issue closed
|
desihub/desisim | 201637359 | Title: Create function to build spectra from targets+truth information
Question:
username_0: The function should be something like this (suggested by @sbailey )
```python
def get_target_spectra(targets, truth, wave=None):
'''
Returns true flux, wavelengths, and template metadata for input targets
Args:
targets: rows of target selection table
truth: row-matched truth information corresponding to targets
Optional:
wave: wavelength array to sample the output templates
Returns (flux,wave,meta) analogous to desisim.templates.XYZ.make_templates()
flux: 2D[ntarget, nwave] array of flux [erg/s/cm2/Angstrom]
wave: 1D[nwave] array of sampled wavelengths [Angstroms]
meta: metadata table for these targets
Note:
targets and truth come from input mocks via
desitarget.build.select_mock_targets().
TODO:
Document exactly which columns of `targets` and `truth` are needed.
'''
```
Answers:
username_0: To generate Lya spectra we would also need the information linking back to the mock files (the forest is stored there). A new column that could be added to `truth` should be `MAG_G`, as it is needed to complete the full spectrum using `desisim.templates.QSO`.
username_0: `ZMETAL`, `AGE`, `TEFF`, `LOGG` and `FEH` are other columns that could be added to `truth` and facilitate the creation of `MWS` spectra.
username_1: The ```truth``` catalog also needs to have the apparent magnitude of each source, ```MAG``` which is used to normalize the spectra. Unfortunately ```desisim.templates``` also needs to know the normalizing filter name, which would be annoying to have as a separate column (since our mocks rely on at most 2-3 different filters).
One possibility would be to use different suffixes on ```MAG```, for example, ```MAG_R``` would map to ```normfilter = decam2014-r```, ```MAG_R_SDSS``` would map to ```normfilter = sdss2010-r```, and so forth. But then we end up with a bunch of empty values for many sources, which is even more wasteful.
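A sketch of that suffix mapping (the exact filter set here is illustrative):
```python
# map truth-table magnitude column suffixes to speclite filter names
NORMFILTER_FOR_COLUMN = {
    'MAG_R': 'decam2014-r',
    'MAG_G': 'decam2014-g',
    'MAG_R_SDSS': 'sdss2010-r',
}

def normfilter(colname):
    """Return the normalizing filter implied by a truth column name."""
    return NORMFILTER_FOR_COLUMN[colname]
```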
username_1: I think this issue has been closed / superseded by developments in `desitarget.`
Status: Issue closed
|
pyrocms/pyrocms | 208218885 | Title: [Pyro Installer] -- Does not run on Windows
Question:
username_0: ```
$ pyro new PyroCMS
Installing Pyro...
Warning: proc_open(): CreateProcess failed, error code - 267 in C:\Users\Sergio\AppData\Roaming\Composer\vendor\symfony\process\Process.php on line 310
[Symfony\Component\Process\Exception\RuntimeException]
Unable to launch a new process.
new [--tag [TAG]] [--dev] [--] [<name>]
```
Triggers due to proc_open() having issues with relative paths in Windows.
Apparently a `config.json` can be created to set the paths for `home` and `cache-vcs-dir` to their actual locations to fix this.
More info here: https://github.com/composer/composer/issues/1995#issuecomment-24180597
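Per that comment, a `config.json` in COMPOSER_HOME along these lines should work (the paths are examples for a typical Windows setup):
```json
{
  "config": {
    "home": "C:/Users/you/AppData/Roaming/Composer",
    "cache-vcs-dir": "C:/Users/you/AppData/Roaming/Composer/vcs"
  }
}
```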
Status: Issue closed
Answers:
username_1: This kinda defeats the purpose, unfortunately, having to point to the project folder's composer file...
username_2: I have the same issue... Why is this issue still not fixed?
username_0: You don't need to use the installer, use `composer create-project pyrocms/pyrocms` instead.
username_2: @username_0 Yeah, I've been trying since this morning to get this up and running; I tried that already 3 times. Once with the command like yours, then with a --clean source option or something, and another time just trying something else. In the end I called mydomain.com/installer and got server error 500 with IIS.
Then I used the CLI installer, which finally installed it. The homepage shows fine; when accessing the backend it throws error after error. So now I am back at this part, trying it another time with the installer method...
I can't believe the installer has issues like proc_open not working. That honestly makes me think of not using PyroCMS after one day of figuring out how this works.
cfpb/hmda-pilot | 49750367 | Title: Display Macro Quality Edit report
Question:
username_0: Child of Epic Loan Level Edit Report Detail
As a user I want to be able to view the macro edits that failed
so when a macro edit fails, I am able to review the failed edits
**Given** that the macro edit checks were run on my HMDA data filing
**When** I click on a View Macro Edits from the Dashboard
**Then** I see a list of the failed Macro Edits
Acceptance Criteria
Once the financial institution's data passes all the syntactical and validity edits, the system allows the user to move onto the quality and macro edits. The user should see a summary page with all of the macro edits that failed (except Q595 and Q029) and the counts of the failed macro edits. The user should be able to click on a failed macro edit to see the detail for that particular macro quality edit.
Tasks for story
- [x] Design loan level edit report for macro quality edits @vizui
- [x] Display loan level edit report for macro quality edits
- [x] Return to the Summary view
- [x] Navigate to a different error
- [x] Paginate the list of edit errors for current report
- [x] Select number of errors to display at a time
Status: Issue closed
Answers:
username_1: This view should be completed based on the current wireframes. However, the engine has yet to produce any macro errors yet that return the calculated properties. Until the engine starts returning valid properties, nothing is being displayed. So I'm going to block this story until we can actually confirm that something is being displayed correctly.
Status: Issue closed
|
utsw-bicf/pandiseased | 722626559 | Title: change search filters for Nephrectomy type
Question:
username_0: Right now it looks like this:

- [x] Change test date total to Radical
- [x] Change real date total to Radical
Answers:
username_1: Updated for real data.
Status: Issue closed
|
archco/moss-ui | 247292306 | Title: Reduce "mounted" process in dropdown.
Question:
username_0: -
Status: Issue closed
Answers:
username_0: This approach is not sufficient.
username_0: Let's minimize what `mounted()` does in the other components as well, including dropdown.
For now, I modified things so that `querySelector` is no longer needed inside `mounted()`, to the extent possible without changing the structure.
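A sketch of the ref-based approach (names are made up):
```vue
<template>
  <div ref="menu" class="dropdown-menu"></div>
</template>

<script>
export default {
  mounted() {
    // this.$refs.menu resolves directly; no document.querySelector needed
    this.$refs.menu.classList.add('ready');
  },
};
</script>
```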
username_0: If a class that includes style rules is bundled into a component, the styles are not applied until the component has rendered. Let's fix this.
- Option 1: use [v-cloak](https://vuejs.org/v2/api/#v-cloak) to hide the affected parts until then.
- Option 2: exclude the style parts from the component.
Status: Issue closed
|
jakubkulhan/bunny | 132066978 | Title: Version 1.0 - PHP7-only
Question:
username_0: - return types
- scalar type hints
Answers:
username_1: :+1:
username_2: No problem for @slevomat. We already run on PHP 7. It's a pity that ReactPHP does not have any plans to move to PHP 7 and seems mostly unmantained.
username_3: :+1: here. No point in all this userland async without 7's performance
Status: Issue closed
|
gotson/komga | 842554385 | Title: [Feature Request] Reset all Metadata for a book/series
Question:
username_0: ### Is your feature request related to a problem? Please describe.
It should be possible to reset all metadata for a book or series with one click.
### Describe the solution you'd like
In the best case, there is a menu item to reset all metadata for a book or series.
After confirmation, all data should be reset.
Answers:
username_1: Can you describe clearly what you mean by reset metadata?
username_0: Every book or series can have metadata like title, author, release date, summary, status, read state, publisher, genre, tags, and also the locked status.
It would be nice to reset or clear this data with one click.
So all fields are empty and can be refilled (manually or by a scan).
username_1: Not sure what purpose it would serve?
If you want to edit manually, just edit manually.
If you want to refresh the metadata, there is already a function for that.
username_0: Hello username_1,
yes, it is possible to fill it in manually.
And yes, it is possible to do it by hand each time.
The idea behind this is to make it easier.
2 examples:
I already added metadata to a series.
Then I found out the information like tags, summary, authors, and release date was not optimal.
So I start again to fill in all metadata with the better description.
At the moment I move the series to another place, then do a rescan of the directory to clean all metadata.
Then I move the directory back to the right place.
Then I start again to fill in all metadata.
This is possible, but nicer would be a button to reset all data for books or series.
Other example: I filled in metadata automatically via the API. I found out I made a mistake in the script.
Now I can proceed like in example 1 (move, scan, move back) to make sure ALL metadata is deleted.
Or we have a small menu item.
I think this feature does not have the highest priority, but I have experimented a lot with metadata in the last days and found out this feature would help :)
Maybe there is a possibility via the API to reset all metadata, and not only to set the data to "".
Only a small wish, nothing urgent :-)
MKH
username_1: I don't understand. In both your examples you are going to refresh metadata after resetting, so there's no point in resetting. Or did I miss something?
username_0: The point here is to reset ALL metadata because a mistake was made while filling it in.
To make sure all metadata is cleaned up, it would be faster to have one button/menu item to reset all metadata with one click.
Metadata has entries like date, tags, year, volume, authors ... and I want to make sure I don't forget to delete an item.
And sometimes I don't want to fill in the data again until I have better-quality metadata, so I want to leave the data empty.
username_2: I want to keep my reading history and all manual edits, so I'd rather not remove and rescan the libraries.
username_0: *Push* |
crate/docker-crate | 1079731828 | Title: Latest version not available Docker Registry
Question:
username_0: Hey guys,
as the log4j vulnerability was fixed yesterday in crate 4.6.6, we would really like to update our kubernetes deployment of crate.
But unfortunately the latest version is not available in docker hub (https://hub.docker.com/_/crate, latest available tag is 4.6.5).
Could you please push the version so we can close the open vulnerability in our env? (yes, we've set `LOG4J_FORMAT_MSG_NO_LOOKUPS` already)
Answers:
username_1: Dear @username_0,
thank you for writing in.
After publishing the CrateDB 4.6.6 release to the testing channels yesterday (https://github.com/crate/crate/pull/11971), the image `crate/crate:4.6.6` is already available. Submitting a request to publish the official image `crate:4.6.6`, like https://github.com/docker-library/official-images/pull/11331, will probably happen tomorrow.
With kind regards,
Andreas.
username_0: Hey @username_1 ,
thank you for the quick response! I wasn't aware of the crate/crate image in the docker registry.
Status: Issue closed
username_1: Hi again,
the process of publishing the CrateDB 4.6.6 "official" image at `crate:4.6.6` is on its way, see https://github.com/docker-library/official-images/pull/11518.
With kind regards,
Andreas.
username_1: Hi @username_0,
CrateDB 4.6.6, which mitigates CVE-2021-44228 by bumping to log4j2 2.15, has been published to the "stable" release channels, the corresponding official container image `crate:4.6.6` is also available now.
With kind regards,
Andreas. |
grinnellplans/GrinnellPlans | 113420494 | Title: Drop old interfaces and migrate CSS
Question:
username_0: A lot of work has already occurred on this, but it doesn't have a ticket open yet. In short, we plan to drop support for the older interfaces and use something resembling Postmodern for all users. @username_1 put in a Herculean effort porting most or all of the standard stylesheet options to work with this new interface, so we can migrate everyone who's using those. Then the only drawback is that anyone using a custom stylesheet that depends on the old interfaces will have it break, which hopefully we can minimize.
* [ ] Finish porting standard stylesheets
* [ ] Make [base stylesheet](https://github.com/username_1/planscss/blob/css/html/styles/modern.css) readily available + instructions for using it
* [ ] Compile list of [stylesheets in use and dependent on old interfaces](https://groups.google.com/forum/#!topic/grinnellplans-development/jtgwbmNlWdI)
* [ ] Reach out to those authors and see if we can get them to update.
Answers:
username_0: @username_1, could you clarify for me how far the CSS porting is? Blue, Jolly and Parchment are not yet ported, is that correct? And is all of your work found in https://github.com/username_1/planscss ?
username_1: The CSS porting is complete for every built-in stylesheet. Yes, all my work is in the repo you linked, but **not in branch master**. The new sheets are in the `css` branch (I used `master` exclusively to code up a preview tool in PhantomJS).
This work was committed to PHP Plans in a [single giant patch](https://github.com/grinnellplans/grinnellplans-php/commit/4512e3f39eb0abe199f4d1f676bfdabb9640ebbf) on March 15. It was partially reverted [a month later](https://github.com/grinnellplans/grinnellplans-php/commit/2ad3d2b587112fed103cb924408981218bb4c73a) because someone really hated having the pre-2009 plans logos back. I think it must have been a younger person who didn't know that those were *original*, *vintage* logos, painstakingly recovered from the dank depths of the SVN repo. Anyway if you share that person's opinion you should pull from the `nologos` branch instead (check the [network graph](https://github.com/username_1/planscss/network)).
I don't have much time to work on plans these days but I hope this helps you guys.
username_0: @username_1 thank you, that was exactly the information I needed. I'll pull in all your latest sheets, and uhhhhh we'll see about those vintage logos.
Status: Issue closed
username_0: I'm going to break out further work on this into tickets under a new milestone: #139 and #140. |
GovDataOfficial/DCAT-AP.de | 808738717 | Title: Changes to the properties of dct:temporal and dct:PeriodOfTime
Question:
username_0: The obligation level of the dcat:Dataset property dct:temporal was raised from optional to recommended.
The class dct:PeriodOfTime will in future drop schema:startDate and schema:endDate in favour of dcat:startDate and dcat:endDate. The range is rdfs:Literal typed as xsd:date or xsd:dateTime, with obligation level "recommended".
In addition, time:hasBeginning and time:hasEnd are introduced as optional properties. Their range is "time:Instant", which makes these properties an option if, for example, a calendar deviating from the standard is to be used.
**The class dct:PeriodOfTime and dct:temporal**
```
class "dct:PeriodOfTime" <<GEÄNDERT: recommended>> {
GELÖSCHT: <<optional>> schema:startDate ~> rdfs:Literal [0..1]
GELÖSCHT: <<optional>> schema:endDate ~> rdfs:Literal [0..1]
NEU: <<recommended>> dcat:startDate ~> rdfs:Literal [0..1]
NEU: <<recommended>> dcat:endDate ~> rdfs:Literal [0..1]
NEU: <<optional>> time:hasBeginning ~> time:Instant [0..1]
NEU: <<optional>> time:hasEnd ~> time:Instant [0..1]
}
"dcat:Dataset" --> "*" "dct:PeriodOfTime" : <<recommended>> dct:temporal
```

One way to implement this in DCAT-AP.de would be to mark the currently valid properties schema:startDate and schema:endDate as "deprecated" and replace them with the equivalent dcat properties.
Status: Issue closed
Answers:
username_1: Implemented with the release of [DCAT-AP.de 2.0](https://www.dcat-ap.de/def/dcatde/2.0/spec/).
Yarulika/Restaurant | 554704436 | Title: README.md
Question:
username_0: It's a good idea, but usually we put a couple of sections in a README file. Look up how to format headlines, bullet points, and text:
- Overview
- Getting it to work
- Requirements
- Build the app
- Run the spring app
- Run Tests and generate test-coverage report
- Swagger UI
- Feature work
- Technical comments
websockets/ws | 762312969 | Title: Server crashes with Invalid WebSocket frame
Question:
username_0: Duplicate of closed issue: https://github.com/websockets/ws/issues/1777
#### Description
The server crashes on an invalid WS frame. I have the impression this is not intended behaviour. I've been reading the code at `Receiver.controlMessage` and `Receiver.receiverOnError` but can't seem to find a reason for it to throw an Error besides emitting it. If I knew how to catch this error, I could work around the problem.
#### Reproducible in:
- Node.js version(s): 12.18.3
- OS version(s): Ubuntu 20
#### Steps to reproduce:
Send a 1006 frame to the server
#### Expected result:
Server emits error
#### Actual result:
Error thrown and app crashes
#### Attachments:
```
events.js:291
throw er; // Unhandled 'error' event
^
RangeError: Invalid WebSocket frame: invalid status code 1006
at Receiver.controlMessage (/app/node_modules/ws/lib/receiver.js:464:18)
at Receiver.getData (/app/node_modules/ws/lib/receiver.js:350:42)
at Receiver.startLoop (/app/node_modules/ws/lib/receiver.js:143:22)
at Receiver._write (/app/node_modules/ws/lib/receiver.js:78:10)
at doWrite (_stream_writable.js:403:12)
at writeOrBuffer (_stream_writable.js:387:5)
at Receiver.Writable.write (_stream_writable.js:318:11)
at Socket.socketOnData (/app/node_modules/ws/lib/websocket.js:900:35)
at Socket.emit (events.js:314:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
Emitted 'error' event on WebSocket instance at:
at Receiver.receiverOnError (/app/node_modules/ws/lib/websocket.js:805:13)
at Receiver.emit (events.js:314:20)
at errorOrDestroy (internal/streams/destroy.js:108:12)
at onwriteError (_stream_writable.js:418:5)
at onwrite (_stream_writable.js:445:5)
at Receiver.startLoop (/app/node_modules/ws/lib/receiver.js:152:5)
at Receiver._write (/app/node_modules/ws/lib/receiver.js:78:10)
[... lines matching original stack trace ...]
{
[Symbol(status-code)]: 1002
}
```
Answers:
username_1: This is expected. Add a listener for the `'error'` event as per https://github.com/websockets/ws/issues/1777#issuecomment-660803472.
Status: Issue closed
username_1: FYI, it's not the server that emits the error but the server client (the `WebSocket` object received in the `'connection'` event).
username_0: @username_1 Thanks for having a look. Some context:
The problem was indeed in our application code. The event listeners, including the one for "error", got removed as part of closing the `WebSocket` and cleaning up. The problem with that approach, however, is the frame the connection receives in response to the close frame `1001`. If that frame causes an error, for example when it is `1006`, the `WebSocket` is going to throw because the listener has already been removed...
I solved it by cleaning up listeners only on the "close" event. If the `WebSocket` somehow never gets a "close" event, that can be a slight problem: the client could fail to close the connection (and keep doing ping/pong), so it would stay open even though the server has called `.close()` on the socket. I think the right approach is to stop doing ping/pong for that connection when calling `.close()`, so that the socket will eventually be terminated once a ping/pong timeout is reached in application code.
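For anyone landing here, a minimal sketch of that approach (the server setup and handler bodies are illustrative, not the exact application code):
```js
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  // Keep an 'error' listener attached for the whole lifetime of the
  // socket; without one, an invalid frame (e.g. a close frame carrying
  // the reserved code 1006) becomes an unhandled 'error' event and
  // crashes the process.
  ws.on('error', (err) => {
    console.error('websocket error:', err.message);
  });

  // Detach listeners only once the connection is fully closed. Removing
  // them right after calling ws.close() leaves a window where the peer's
  // (possibly invalid) close frame arrives with no 'error' handler.
  ws.on('close', () => {
    ws.removeAllListeners();
  });
});
```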
department-of-veterans-affairs/va.gov-team | 713855334 | Title: Access for [individual]
Question:
username_0: Hello @department-of-veterans-affairs/vsp-operations, I'm looking to get SOCKS/AWS access, please kindly. 🙂 My e-QIP adjudication is under way.
Answers:
username_1: @username_0 Please verify that you have access to the SOCKS proxy when you have a moment.
username_1: **Requesting AWS Access**
User: <NAME>
cc: @Jeff-Hayter
username_0: @username_1 I do have SOCKS access now, thank you!
When will I receive the AWS login url/temp password? cc: @Jeff-Hayter
username_2: Looks like your user was made, but you weren't yet given first time login credentials. Sorry about that. Will get that to you momentarily.
username_2: First time login credentials provided to user.
Status: Issue closed
|
yourWaifu/sleepy-discord | 377480342 | Title: Cannot find headers
Question:
username_0: Hey, I just started learning C++ and I tried to make a bot, but I can't compile the project. Here is the [console output](https://hastebin.com/wepetesoxo.php), and here is my [cmake file](https://hastebin.com/lenoxuxeki.shell). Sorry if it's an obvious problem.
Status: Issue closed
Answers:
username_0: I just moved all the headers to the deps dir
bazelbuild/bazel | 163556860 | Title: Rust + cc_configure == crash
Question:
username_0: I get a Bazel crash if I add both cc_configure and Rust setup to my WORKSPACE file:
```
git_repository(
name = "io_bazel_rules_rust",
remote = "https://github.com/bazelbuild/rules_rust.git",
tag = "0.0.2",
)
load("@bazel_tools//tools/cpp:cc_configure.bzl", "cc_configure")
load("@io_bazel_rules_rust//rust:rust.bzl", "rust_repositories")
cc_configure()
rust_repositories()
```
```
$ bazel build
java.lang.RuntimeException: Unrecoverable error while evaluating node 'WORKSPACE_FILE:[/Users/username_0/Projects/experiment/Rust]/[WORKSPACE], 2' (requested by nodes 'EXTERNAL_PACKAGE:[/Users/username_0/Projects/experiment/Rust]/[WORKSPACE]')
at com.google.devtools.build.skyframe.ParallelEvaluator$Evaluate.run(ParallelEvaluator.java:1044)
at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:474)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Multiple entries with same key: @bazel_tools//tools/cpp:cc_configure.bzl=com.google.devtools.build.lib.syntax.Environment$Extension@19951b19 and @bazel_tools//tools/cpp:cc_configure.bzl=com.google.devtools.build.lib.syntax.Environment$Extension@19951b19
at com.google.common.collect.ImmutableMap.checkNoConflict(ImmutableMap.java:136)
at com.google.common.collect.RegularImmutableMap.checkNoConflictInKeyBucket(RegularImmutableMap.java:98)
at com.google.common.collect.RegularImmutableMap.fromEntryArray(RegularImmutableMap.java:84)
at com.google.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:295)
at com.google.devtools.build.lib.packages.WorkspaceFactory.execute(WorkspaceFactory.java:190)
at com.google.devtools.build.lib.packages.WorkspaceFactory.execute(WorkspaceFactory.java:175)
at com.google.devtools.build.lib.skyframe.WorkspaceFileFunction.compute(WorkspaceFileFunction.java:112)
at com.google.devtools.build.skyframe.ParallelEvaluator$Evaluate.run(ParallelEvaluator.java:995)
... 4 more
```
Answers:
username_0: It looks like switching from ImmutableMap.Builder to LinkedHashMap works. See:
https://github.com/username_0/bazel/commits/fix-workspace-factory
username_0: I think this is just the duplicate cc_configure call, and doesn't have anything to do with rust.
username_1: change is inflight
Status: Issue closed
|
haikuports/haikuports | 202370505 | Title: Some packages do not have a safe source
Question:
username_0: It is required to have a safe source (single file download, so we can compute an SHA256).
At least the following packages are failing this:
- librecad
- marble
- getconf
Answers:
username_1: librecad has been updated.
Perhaps a feature could be added so that unsafe sources show up in a lint check.
Status: Issue closed
|
sanity-io/sanity | 726693539 | Title: Error messages from Popper.js
Question:
username_0: **Describe the bug**
Getting some errors from Popper.js when hovering over validation notifications. The popups are flickering and Popper logs this to the console:
```
PopperJS: an invalid property has been provided to the "undefined" modifier, valid properties are "name", "enabled", "phase", "fn", "effect", "requires", "options"; but "preventOverflow" was provided.
(anonymous) @ validateModifiers.js:62
```
**To Reproduce**
Just upgraded Sanity and it started happening.
**Which versions of Sanity are you using?**
@sanity/base 2.0.1 (latest: 2.0.6)
@sanity/cli 2.0.1 (latest: 2.0.5)
@sanity/color-input 2.0.1 (latest: 2.0.5)
@sanity/components 2.0.1 (latest: 2.0.6)
@sanity/core 2.0.1 (latest: 2.0.5)
@sanity/dashboard 2.0.1 (latest: 2.0.5)
@sanity/default-layout 2.0.1 (latest: 2.0.6)
@sanity/default-login 2.0.1 (latest: 2.0.5)
@sanity/desk-tool 2.0.1 (latest: 2.0.6)
@sanity/google-maps-input 2.0.1 (latest: 2.0.6)
@sanity/production-preview 2.0.1 (latest: 2.0.5)
@sanity/vision 2.0.1 (latest: 2.0.5)
Answers:
username_1: Thank you for reporting! We recently upgraded popper and are aware of this issue. Working on a fix 👍
username_1: This should be fixed now 😊
Status: Issue closed
|
loumloum/VisualDon | 595224958 | Title: Project: select your data
Question:
username_0: The part that will be useful is in `Usage`.
`Usage` looks like this:
```js
{
"app_opens": {
"2018-06-23": 38,
"2018-06-24": 205,
// ...
},
"swipes_likes": {
"2018-06-23": 23,
"2018-06-24": 80,
// ...
},
// ...
}
```
There are 8 keys:
```js
[
"app_opens",
"swipes_likes",
"swipes_passes",
"matches",
"messages_sent",
"messages_received",
"advertising_id",
"idfa"
]
```
Not all of them are usable, but the data from the Excel file is there.
Take the opportunity to build one object with all the data grouped by date; it will be easier to use.
Like this:
```js
[
{
"date": "2018-06-23",
"app_opens": 38,
"swipes_likes": 23,
"swipes_passes": 245,
"matches": 9,
"messages_sent": 13,
"messages_received": 20,
"advertising_id": "",
"idfa": ""
},
{
"date": "2018-06-24",
"app_opens": 205,
"swipes_likes": 80,
"swipes_passes": 1837,
"matches": 31,
[Truncated]
const datasets = Object.keys(usage)
const dates = Object.keys(usage[datasets[0]])
const result = dates.map(date => ({
date,
...datasets
.map(dataset => ({ dataset, value: usage[dataset][date] }))
.reduce((r, { dataset, value }) => ({ ...r, [dataset]: value }), {}),
}))
console.log(JSON.stringify(result, null, 2))
```
In the `projet/Data` folder:
```
node fix > usage.json
```
**Delete `data.json`**
dart-lang/sdk | 1079054307 | Title: [Windows] DynamicLibrary.open() prints incorrect error code
Question:
username_0: Hi, I believe `status` should be passed to `Utils::SCreate()`, not `error`:
https://github.com/dart-lang/sdk/blob/6c98722611e93a4b2f9332bdaa1f03df87ba4ed9/runtime/platform/utils.cc#L363-L364
Answers:
username_1: @username_0 thanks for pointing this out, will submit a PR to fix this.
username_1: https://dart-review.googlesource.com/c/sdk/+/223742
Status: Issue closed
|
inaturalist/iNaturalistAndroid | 920821679 | Title: NullPointerException in ObservationEditor.refreshProjectFields
Question:
username_0: https://console.firebase.google.com/u/2/project/inaturalist-ios/crashlytics/app/android:org.inaturalist.android/issues/7eda80db1016eddec554810d0c0bd753
```
Caused by java.lang.NullPointerException: Attempt to invoke virtual method 'int java.lang.Integer.intValue()' on a null object reference
at org.inaturalist.android.ObservationEditor.refreshProjectFields(ObservationEditor.java:4428)
at org.inaturalist.android.ObservationEditor.onCreate(ObservationEditor.java:1132)
at android.app.Activity.performCreate(Activity.java:8121)
```<issue_closed>
Status: Issue closed |
eth-brownie/brownie | 1028266542 | Title: `brownie test` command (and `pytest` command) stopped loadiing current project
Question:
username_0: ### Environment information
* `brownie` Version: 1.16.4
* `ganache-cli` Version: 6.12.2
* `solc` Version: 0.5.0
* Python Version: 3.7.11
* OS: osx
### What was wrong?
Starting two days ago, my entire environment has been messed up out of nowhere.
I can still run `brownie console` just fine and the current project where my current folder is gets loaded into console and I can interact with the contracts from the console just fine.
But running `pytest` and/or `brownie test` no longer loads the current project's contracts into my testing environment. Previously I had tests broken down into subfolders under the tests/ folder in the root directory. Now every single test fails.
All tests that previously PASSED now ERROR, because all interactions with the contracts came back with
```
E ValueError: Expecting value: line 1 column 1 (char 0)
``` |
Azure/azure-cli-extensions | 640989880 | Title: test_azure_firewall_management_ip_config failed.
Question:
username_0: - If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)
### Extension name (the extension in question)
test_azure_firewall_management_ip_config failed. I think the released package would fail as well.
### Description of issue (in as much detail as possible)
E Message: AzureFirewall af1 management IP configuration cannot be added to an existing firewall. Redeploy with a management IP configuration if you want to use forced tunneling support.
msrestazure.azure_exceptions.CloudError: Azure Error: AzureFirewallManagementIpConfigCannotBeAdded
-----
Answers:
username_1: add to S172
username_0: close this issue since it's been solved
Status: Issue closed
|
janlelis/irbtools | 567844775 | Title: WARN: Unresolved or ambiguous specs during Gem::Specification.reset:
Question:
username_0: I get
```
WARN: Unresolved or ambiguous specs during Gem::Specification.reset:
reline (>= 0)
Available/installed versions of this gem:
- 0.1.3
- 0.1.2
WARN: Clearing out unresolved specs. Try 'gem cleanup <gem>'
Please report a bug if this causes problems.
```
after installing version 3.0.2 on
ruby 2.7
rails 6.0.2.1
when running bundler.
Any idea what could cause this?
Answers:
username_1: No real idea, but maybe https://stackoverflow.com/questions/17936340/unresolved-specs-during-gemspecification-reset/18127613#18127613 or https://github.com/rubygems/rubygems/issues/1070 or https://github.com/rubygems/rubygems/issues/1945 can help?
Status: Issue closed
username_0: Cleanup does not help, but I figured out that there is a version of reline installed
username_0: Updating bundler from 2.0.1 to 2.1.4 fixed it.
Status: Issue closed
username_1: Thank you for figuring this out -- might be helpful for others with the same problem |
bcgov/entity | 1099736700 | Title: Need to scroll to error so that it can be seen
Question:
username_0: **Describe the bug in current situation**
When I receive a validation error after selecting Register and pay, it doesn't jump to the error.
**Impact of this bug**
User may not realize there is an error and just continue to try the button as I did :-).
**Chance of Occurring (high/medium/low/very low)**
high
**Steps to Reproduce**
Steps to reproduce the behavior:
File an amendment and while at the top of the Review screen (before selecting confirm authorization), select "Register and Pay"
**Actual/ observed behavior/ results**
Nothing happens and there is no indication that there is an error
**Expected behavior**
The application should scroll to the error on the screen so that the user can see it.
**Screenshots/ Visual Reference/ Source**
This is what I see on my screen:

This is the error:

Answers:
username_0: This will be resolved by 10521
Status: Issue closed
|
woocommerce/wc-api-php | 325061046 | Title: Get Category Name on Orders request?
Question:
username_0: Using the Orders request is there a way to get the category names for the line_items?
Currently it returns all line items and their names, prices etc but not category name that it belongs to, would love to group them by category to print it out for a picking list. At the moment the order is received in the order the items were placed in the cart.
Original order:
1. red T-shirt
1. blue shorts
1. orange t-shirt
Then group it so it looks like this:
1. red T-shirt
1. orange t-shirt
1. blue shorts
Is this possible? If not what would be a good way to do it?
Answers:
username_1: This doesn't sound like a question related to this REST API library.
For questions about the WooCommerce REST API use https://wordpress.org/support/plugin/woocommerce
Status: Issue closed
|
wingsuitist/brewamp | 117701188 | Title: Sites vs project dir.
Question:
username_0: One question though regarding the tutorial from <NAME> and I was hoping you could help me out.
https://echo.co/blog/os-x-1010-yosemite-local-development-environment-apache-php-and-mysql-homebrew
situation according to your tutorial:
```
~/Sites/project -> http://project.dev
```
But I would like to change that a bit
```
~/Sites/project/htdocs -> http://project.dev
```
How can I change the config so that it shows the content of htdocs when accessing the domain project.dev?
Using htdocs will give me the opportunity to create the entire project in ```~/Sites/project```
```
~/Sites/project/bin
~/Sites/project/.modman
~/Sites/project/backup
~/Sites/project/htdocs
~/Sites/project/.git
```
Answers:
username_1: If you change this in httpd-vhosts.conf (there are two places) it should work:
`VirtualDocumentRoot /LocalSites/%-2+/htdocs`
Status: Issue closed
|
vadim030303/Test-Task | 681119270 | Title: Update your code
Question:
username_0: Hello,
You wrote a good test task!
I need to check parsing result from demo bank.
Save JSON data with parsed accounts and transactions in a file.
You should write a test that checks the data from demo provider bendigo bank.
1. Read about RSpec. It is a framework for testing your code.
- https://rspec.info
- https://relishapp.com/rspec/rspec-expectations/docs/built-in-matchers
2. Create `bendigobank_spec.rb` file, describe class BendigoBank.
3. Write specs for `parse_accounts` and `parse_transactions`.
4. Take HTML from bendigo bank and save it in a file (check screenshot).
`html_example = Nokogiri::HTML(File.read('accounts.html'))`

5. Call method for parse accounts and send HTML data. `parse_accounts(html_example)`
6. Check the number of accounts and show an example account in a hash format.
Example:
```ruby
it 'check number of accounts and show an example account' do
html_example = Nokogiri::HTML(File.read('accounts.html'))
  accounts = parse_accounts(html_example) # NOTE: It is my example, not real code.
expect(accounts.count).to eq(5)
expect(accounts.first.to_hash).to eq(
{
"name" => "<NAME>",
"currency" => "USD",
"balance" => 1959.90,
"nature" => "account",
"transactions" => []
}
)
end
```
7. Do the same for transactions.
Good luck!
Answers:
username_1: @username_0 I have created pull request, please review it.
username_0: Looks very good!
Your task is complete.
I will ask other developers to review it.
Add folder `/bin` to `.gitignore`. |
pokanop/nostromo | 1111390867 | Title: Passthrough autocomplete not working correctly in shell
Question:
username_0: In zsh, it seems like `nostromo` commands aren't defaulting or falling back to the shell's autocomplete logic.
I like the [`bat`](https://github.com/sharkdp/bat) tool which is a replacement to `cat`. When replacing the standard command with:
```sh
nostromo add cmd cat bat
```
It works fine but after typing in `cat` and tabbing with the keyboard, it no longer tries to autocomplete the files or folders on the system.
This really sucks for the tool, and it would be great to figure out why it's happening and how to fall back to the shell's file system completions. Investigate and produce a fix.
Answers:
username_0: With the latest changes, this appears to be fixed for at least top-level commands. Running `cat` as an alias to `bat` autocompletes with file system results. But it still seems not to work after a secondary completion. This seems to work for something like `git status`, where tabbing shows file system autocompletion.
Status: Issue closed
username_0: The latest upgrade to cobra has fixed this issues and now running multi-level commands also works with file and folder completion. Closing out. |
rivafarabi/deckboard | 479072431 | Title: 1.6.2 Android app doesn't work
Question:
username_0: I've been using this app perfectly on my OnePlus 5T and two Android tablets.
Now, with this new version, when I try to connect to the server on my PC, the app is stuck on the looping circle screen and the only thing I can do is go back to the title screen.
I've tried updating the app on those 3 devices, uninstalling it, installing the server again, etc., with no results at all.
Is it a known issue? I didn't see it on this forum, so I thought I had to report it.
Thank you for the work you are doing, guys, this app is awesome!
Answers:
username_1: Have you checked whether Deckboard or deckboard.exe is allowed in the Firewall's Allowed apps settings?
username_0: Yes, it's checked on both private and public networks, and it doesn't work even with the firewall disabled.
It has been working perfectly until now.
username_1: can you send me the log.log file located at C:\Users\{USER}\AppData\Roaming\Deckboard?
username_0: Of course
username_0: Hi, have you figured anything out?
username_1: still waiting for the log file. You can send it to my email or dropbox link
username_0: log.log
<https://drive.google.com/file/d/0BwmhKVV-3yWgTTFQRmtZTHc4VlBER1BxV0UyeG5scHpOYjdJ/view?usp=drive_web>
It was attached to the last email, but here you have it again.
username_1: I kinda found the problem, which I already fixed for the next release candidate.
Will update in several hours.
username_0: Thank you very much, and thanks again for your work; your app is essential!!
username_1: The first release candidate for version 1.7.0 is up https://github.com/username_1/deckboard/releases/tag/v1.7.0-rc1
Tell me if the problem still persists.
username_0: I'm having this error after the update
[image: image.png]
And I've lost all my boards
[image: image.png]
username_1: Can you upload the images through a GitHub comment or a Google Drive link, like the log file before? The images don't load correctly.
username_0: I'm having this error after the update

And I've lost all my boards

I can't even import because the dialog doesn't show up, and the same goes for the links to Twitch, Twitter, Spotify...
I hope this helps you solve the issue.
username_1: Thanks for the info. I will update this immediately.
username_1: v1.7.0-rc2 should fix the issue https://github.com/username_1/deckboard/releases/tag/v1.7.0-rc2
username_0: Now the app works, but I have the same problem from the beginning.
When I try to connect from my tablet or phone, it tries, but returns an error:
Uh-oh!
The device can't connect to your computer. Please try again.
Is there any update for the Android app?
username_1: can you resend the log file and the database.db file from
C:/Users/{USER}/deckboard so I can analyze whether the problem is indeed
a connection issue or not?
username_0: Here is the database
https://drive.google.com/open?id=1UfCKpJDLboRhPYHXrU1b9M5pjHKp69KW
Where was the log?
username_1: C:\Users{USER}\AppData\Roaming\Deckboard
username_0: https://drive.google.com/open?id=0BwmhKVV-3yWgTTFQRmtZTHc4VlBER1BxV0UyeG5scHpOYjdJ
username_1: Based on the log you sent me, there is no problem with the app or your saved board.
I also tested your board on 2 different networks, and it works just fine.
Can you recheck the firewall settings and make sure deckboard.exe is checked under Allowed apps and features? In particular, the executable file path must point to the actual file.

username_0: Yeah! You nailed it!
That was the problem. The firewall rule was pointing to C:\program files\deckboard\deckboard.exe instead of c:\users\(user)\appdata\local\programs\deckboard\deckboard.exe.
Thank you very much for your effort in solving this issue.
Status: Issue closed
username_1: Great to hear. Let me know if there is any problem with the release candidate. :) |
logaretm/vee-validate | 886804824 | Title: Checkbox with `validateOnChange: false` won't update form values when checked
Question:
username_0: **Versions**
- vee-validate: 4.3.6
- vue: 3.0.11
**Describe the bug**
When `validateOnChange` is set to false, clicking on a checkbox won't update the form's values. Is this intended behavior or a bug?
**To reproduce**
Steps to reproduce the behavior:
1. Set the VeeValidate configuration to `validateOnChange: false` (see the sketch below)
2. Click on checkbox with any value
3. Values, inspected for example via `v-slot="{ values }"`, won't show any change
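For reference, a minimal sketch of the global configuration from step 1, using vee-validate's `configure` helper (all other options are left at their defaults):
```js
import { configure } from 'vee-validate';

// Disable validation triggered by change events globally (step 1 above).
configure({
  validateOnChange: false,
});
```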
**Expected behavior**
**Demo link**
https://codepen.io/username_0/pen/yLMYpGq
Answers:
username_1: Thank you for reporting this, this indeed looks like a bug. Will see what I can do.
Status: Issue closed
|
FStarLang/FStar | 71612388 | Title: Impossible: Forced tk not present
Question:
username_0: ```
module UnexpectedError
type foo : (unit -> Tot bool) -> Type =
| Test : f:(unit -> Tot bool) -> foo f
val bar : foo (fun _ -> true) -> unit
let bar t = match t with
| Test _ -> ()
```
This code fails with
```
Unexpected error; please file a bug report, ideally with a minimized version of the source program that triggered the error.
Impossible: Forced tk not present (bug.fst(6,24-6,28))
```
Answers:
username_1: thanks for the report ... this is related to my most recent commits, obviously. Working on it now
username_1: Yes, indeed. This is an issue at least as far back as e9efcadf4ef041d237d7907db16fa21e01619250
Status: Issue closed
username_1: Just pushed a fix ... but it's not an optimal one. You may not get all the equations you expect on implicitly bound function-typed values. In this case, the implicitly bound function appears in the discriminator `isTest #(fun x -> true) t`
Closing it for now ... but may have to reopen in the future. |
ladislav-zezula/CascLib | 113486250 | Title: can't open overwatch casc
Question:
username_0: Can't open the Overwatch CASC storage.
To force install Overwatch:
```
battle.net.exe --switcherall --install --game=prometheus --installlocale=enUS --installregion=US --torrentinstall
```
Answers:
username_1: Installation works so far, I'll look at it.
username_2: @username_1
I just force installed Overwatch
(if you have trouble, revert the battle.net launcher to the previous version just before they patched it)
I can not extract the files.
Tried it with game storage, got this...
https://i.gyazo.com/773f8731dd278acebaff3ace275b5b0a.png
Tried it with open storage, got this...
https://i.gyazo.com/84693e69c4b7e0f085e6b39074afcdeb.png
Please do your best to get it working for Overwatch! I REALLY want to extract the models!
username_1: username_2: Working on it.
username_3: I extract the rootfile, but i found it's a textfile(start with #MD5, i had thought it's the signature )
Here is the rootfile
#MD5|CHUNK_ID|FILENAME|INSTALLPATH
FE3AD8A77EEF77B383DF4929AED816FD|0|RetailClient/GameClientApp.exe|GameClientApp.exe
5EDDEFECA544B6472C5CD52BE63BC02F|0|RetailClient/Overwatch Launcher.exe|Overwatch Launcher.exe
6DE09F0A67F33F874F2DD8E2AA3B7AAC|0|RetailClient/ca-bundle.crt|ca-bundle.crt
99FE9EB6A4BB20209202F8C7884859D9|0|RetailClient/ortp_x64.dll|ortp_x64.dll
........
username_2: @username_1
So, do you have a work-in-progress fix available for download? I'm not good with GitHub, not sure where the exe downloads are.
Sorry, no rush, I just want to do something today.
username_1: There is no EXE download here. The preliminary version of CascView can already extract files, but can't give them names yet.
username_2: @username_1
From Overwatch? Also, can't give them names? I think I follow, but I can still get files out, right?
I'm going to try extracting crap again.
username_2: @username_1
Ah, I see, I CAN get the files, but none of them are really identifiable.
username_2: @username_1
Extracting gives me an error, probably due to the naming issues.
Status: Issue closed
username_1: Ok, some basic functionality is done; however, until I (or someone) figure out the name -> encoding key relationship, both CascLib and CascView will only extract files by their MD5.
crystal-lang/crystal | 495476308 | Title: ICE: Module validation failed: Invalid bitcast
Question:
username_0: Code below:
```cr
class Object
def to_foo(io); end
def to_foo
String.build &->to_foo(IO)
end
end
Time.new.to_foo # for `Reference`s works fine apparently
:foobar.to_foo # ... but it crashes having `Value`s as implicit `self`
```
generates following ICE:
```
Module validation failed: Invalid bitcast
%6 = bitcast i32 %self to i8*
(Exception)
from Crystal::CodeGenVisitor#finish:Nil
from Crystal::Compiler#codegen<Crystal::Program, Crystal::ASTNode+, Array(Crystal::Compiler::Source), String>:(Tuple(Array(Crystal::Compiler::CompilationUnit), Array(String)) | Nil)
from Crystal::Compiler#compile<Array(Crystal::Compiler::Source), String>:Crystal::Compiler::Result
from Crystal::Command#run_command<Bool>:Nil
from Crystal::Command#run:(Bool | Crystal::Compiler::Result | Nil)
from main
```
See https://carc.in/#/r/7le1
Answers:
username_1: Related to #7577: you currently can't get a proc of a primitive (number, symbol, etc.) method |
typings/core | 143924315 | Title: Break `npm test` into two
Question:
username_0: One for development: `npm run build && npm run test-cov`.
One for CI: (current one).
So that test can be run much faster during development.
Answers:
username_0: Or maybe I should just use `npm run test-spec` during development instead? :smile:
username_1: Yes, `test-spec` is normally what I write to provide a lighter weight test script. Normally it doesn't do as much as `test` (which is complete) and is faster because it skips things like code coverage.
Status: Issue closed
username_0: Closing this discussion. Use `test-spec` |
0vercl0k/wtf | 1167882387 | Title: It seems that nt!ExGenRandom's code has changed, update the offset! HEVD
Question:
username_0: HEVD nt!ExGenRandom problem
```
C:\Users\test\Desktop\IOCTL>..\wtf\wtf.exe fuzz --backend=bochscpu --name hevd --limit 10000000
Initializing the debugger instance.. (this takes a bit of time)
Setting debug register status to zero.
Setting debug register status to zero.
----
Debug Offset:577891a0
Debug Offset:5771c9a0
Debug Address:10406352
----
It seems that nt!ExGenRandom's code has changed, update the offset!
Failed to initialize the target
```<issue_closed>
Status: Issue closed |