Columns: repo_name (stringlengths 4–136), issue_id (stringlengths 5–10), text (stringlengths 37–4.84M)
jerrygiser/CSZLS
102721366
Title: 36. Field Service — Work Order Query — Map Distribution — "Failed to retrieve data" prompt Question: username_0: ![p3qa f 1x4 k ihk_ ph](https://cloud.githubusercontent.com/assets/11640202/9434257/72e7894e-4a6f-11e5-8450-7d954b26d112.png) Answers: username_1: Please also post a screenshot of the server-side error! username_0: Previously there was no time-type filter here, so entering a time and querying directly would raise an error, and clicking "map distribution" did nothing. After 田楠 added a prompt message, this problem was resolved. Status: Issue closed
qetza/vsts-replacetokens-task
282731261
Title: Changes in Azure Cloud Services web.config Question: username_0: Hi, I'm trying to apply Replace Tokens to changes in web.config in Azure Cloud Services. It isn't working because the web.config file is inside the compressed .cspkg file created in the Cloud Services build. Is there any way to work around this issue? Regards, Henrique Answers: username_1: Hi Henrique, You can replace tokens in the .cscfg file if you have define your settings there or if you can unzip the cspkg file, replace tokens in your web.config and zip the file again. Status: Issue closed
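The workaround username_1 describes — unzip the .cspkg, replace tokens in web.config, and zip it back up — can be sketched in Python. This is an editor's illustration, not part of the task: the function name is invented, and the `#{name}#` token delimiters are an assumption about the token format in use.

```python
# Hypothetical sketch of the unzip/replace/re-zip workaround described above.
import io
import zipfile


def replace_tokens_in_package(pkg_bytes: bytes, tokens: dict) -> bytes:
    """Rewrite a zip archive, substituting #{name}# tokens in .config entries."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(pkg_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.infolist():
            data = src.read(entry)
            if entry.filename.endswith(".config"):
                text = data.decode("utf-8")
                for name, value in tokens.items():
                    text = text.replace("#{%s}#" % name, value)
                data = text.encode("utf-8")
            # Non-config entries are copied through unchanged.
            dst.writestr(entry, data)
    return out.getvalue()
```

In practice you would run this between the build and deploy steps, pointing it at the generated .cspkg file.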
ucdavis/Anlab
884816947
Title: Admin Duplication does not duplicate everything Question: username_0: INC1214800 Original Subject: Admin Duplication does not duplicate everything Urgency Level: Non-Critical Issue Support Department: Programming Support For Application: Anlab Supplied Message Body: Example: Work Order #2336 was duplicated several times to become work orders #2337 - 2340. 1) Notice that the Client ID is duplicated, but the client name is not. 2) Also notice that "additional emails" drops one email. This could lead to all clients not receiving the proper notifications.
Jermolene/TiddlyWiki5
240459960
Title: New Journal overwrites draft tiddler Question: username_0: Clicking New Journal overwrites the tiddler currently being edited if it has the same title. Even if the draft title has already been changed but it has not been saved yet with the new title. The title field of the draft does not change, but the text field gets set to the default, or cleared if there is no default. Answers: username_1: @username_0: Possible this is fixed?
wspr/breqn
423836935
Title: Font Size Bug with siunitx Question: username_0: There is a problem when using the breqn package with siunitx. Compare the font sizes for

```
\documentclass{article}
\usepackage{amsmath}
\usepackage{siunitx}
\begin{document}
\[\frac{1}{\frac{1}{\SI{1.23}{\volt}}}\]
\end{document}
```

and

```
\documentclass{article}
\usepackage{amsmath}
\usepackage{breqn}
\usepackage{siunitx}
\begin{document}
\[\frac{1}{\frac{1}{\SI{1.23}{\volt}}}\]
\end{document}
```

Answers: username_1: Issue is in `siunitx`, and needs the v3 font code, so not an issue _here_. Status: Issue closed
IBMStreams/streamsx.messaging
418242580
Title: Security issue CVE-2017-12610 Question: username_0: In Apache Kafka 0.10.0.0 to 0.10.2.1 and 0.11.0.0 to 0.11.0.1, authenticated Kafka clients may use impersonation via a manually crafted protocol message with SASL/PLAIN or SASL/SCRAM authentication when using the built-in PLAIN or SCRAM server implementations in Apache Kafka. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-12610 Status: Issue closed
cdierkens/bluebird-cabinetry-and-design
629317020
Title: Display the image number for example "1 of 15" images in the portfolio images light box. Question: username_0: Footer could make sense. My initial thoughts were focused on maximizing the size of images. We could concede that since full screen is available users can see it in a larger view already. I like the idea of overriding styles instead of providing our own Header and Footer component. I went with the custom component approach more for speed than correctness, so there wasn’t a strong reason for using that pattern beside the header’s Custom tagline text. I’d just make sure you double check how it appears on mobile with a long tagline and the numbers as you try approaches. Answers: username_1: Played around with this a bit. Figured out that this is possible just by removing the Footer: null from components={{ Footer: null, Header, View }} in the Carousel in PortfolioImages . Not sure that's exactly what we are after, design wise. Also noticed it's possible to override the default styles. So maybe we put it in the footer, or maybe in the header. Will have to play with it some more. username_0: Footer could make sense. My initial thoughts were focused on maximizing the size of images. We could concede that since full screen is available users can see it in a larger view already. I like the idea of overriding styles instead of providing our own Header and Footer component. I went with the custom component approach more for speed than correctness, so there wasn’t a strong reason for using that pattern beside the header’s Custom tagline text. I’d just make sure you double check how it appears on mobile with a long tagline and the numbers as you try approaches. username_1: Okay, peep the 294-image-number branch. I got the footer text to display, and invert. It's pretty quick and dirty. But, was cool to figure out. It's possible to return a custom styles function. Maybe, that makes sense? username_1: Moved the styles to customSyles object. 
Not sure how to pass props other than caption to footer caption. Maybe will play around with the Footer Component and have it return both FooterCaption and FooterCount Status: Issue closed
baidu/amis
571495015
Title: When datetime is 0, could it output "-" instead of 1970? Question: username_0: When datetime is 0, could it output "-" instead of the 1970 epoch date? At the moment I can only do this: "tpl": "<% if (data.update_time == 0) { %>-<% }else{ %> <%= formatDate(data.update_time, format='YYYY-MM-DD HH:mm:ss', inputFormat='X') %> <% } %>", Status: Issue closed Answers: username_1: For now this is the only way.
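The template above encodes a simple rule: a Unix timestamp of 0 renders as "-" rather than the 1970 epoch date. As a plain-Python illustration of that logic (this is not amis code; the function name is invented):

```python
# Editor's sketch: same special-case-zero logic as the amis "tpl" above.
from datetime import datetime, timezone


def format_update_time(ts: int) -> str:
    """Render a Unix timestamp, treating 0 as 'no value'."""
    if ts == 0:
        return "-"
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
```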
snakemake-workflows/chipseq
911811209
Title: snakemake version issue Question: username_0: Hello, I tried to use this workflow and ran into multiple issues, I guess it's related to the snakemake version: If under 5.17.0 ``` WorkflowError in line 37 of /project/umw_ingolf_bach/rui/final/chipseq/workflow/rules/peak_analysis.smk: Extensions for multiext may not contain path delimiters (/,\) and must start with '.' (e.g. .txt). File "/project/umw_ingolf_bach/rui/final/chipseq/workflow/Snakefile", line 13, in <module> File "/project/umw_ingolf_bach/rui/final/chipseq/workflow/rules/peak_analysis.smk", line 37, in <module> ``` if version 6.4.1 ``` MissingInputException in line 163 of /project/umw_ingolf_bach/rui/final/chipseq/workflow/rules/post-analysis.smk: Missing input files for rule phantompeak_correlation: ../workflow/header/spp_corr_header.txt ``` Is this a snakemake version issue? Snakemake seem to have trouble finding '../workflow' in the newer version, and it complained about syntax in the older version. Which snakemake version was this tested on ? Thanks! Ray Status: Issue closed Answers: username_1: Hi Ray, sorry, this whole workflow is still under heavy development as we finish porting all functionality from the [original nextflow workflow](https://nf-co.re/chipseq), which is why there is no release of this workflow, yet. That said, I just merged a very big chunk of work in PR #6 and the newest version also includes automated testing. So the current latest version is dramatically different from 1 hour ago and should work. The snakemake report will just lack a lot of the descriptions for plots and other results, and some final output is still missing. But feel free to try out the newest version (which checks for `snakemake >= 6.4.0`) , just know that the first proper release is still a bit away. Best, David username_1: Also, feel free to open other issues if you find new bugs. 
From the newest version on, the workflow should work, so we're interested in hearing about any behaviour that is unexpected! username_0: Hello David, Thanks a lot! The above issue disappeared with the update. I'll just need more time to adjust the configs. Do you think the workflow is ready for use? Do you have an estimation of when it will be ready for use? Thanks! Ray username_1: We're hoping to have something with a useable snakemake report in about a month, including proper descriptions of all the different outputs. And I think a useable report is the most important part to easily access the results. However, if you feel comfortable sifting through the results without those detailed descriptions and the report, you could probably use it as is and cross-check with the [output descriptions of the original nextflow workflow](https://github.com/nf-core/chipseq/blob/master/docs/output.md). And no matter if you use this now or later, don't hesitate to open issues for questions and bugs!
google/mediapipe
774601739
Title: “CameraXPreviewHelper.startCamera” error:java.lang.NoClassDefFoundError: Failed resolution of: Landroidx/camera/core/CameraX$LensFacing; Question: username_0: when I use mediapipe version 0.8.2 ,build hand_tracking_aar for android,I used a demo that someone else can compile APK successfully.but If I replace it with my AAR, it will prompt error,Hope to get help : java.lang.NoClassDefFoundError: **Failed resolution of: Landroidx/camera/core/CameraX$LensFacing;** at com.google.mediapipe.components.**CameraXPreviewHelper.startCamera(CameraXPreviewHelper.java:47)** at com.ruggear.thirdhandtracking.**ThirdMainActivity.startCamera(ThirdMainActivity.java:211)** at com.ruggear.thirdhandtracking.**ThirdMainActivity.onResume(ThirdMainActivity.java:149)**. **some code with this:** protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(getContentViewLayoutResId()); try { applicationInfo = getPackageManager().getApplicationInfo(getPackageName(), PackageManager.GET_META_DATA); } catch (PackageManager.NameNotFoundException e) { Log.e(TAG, "Cannot find application info: " + e); } previewDisplayView = new SurfaceView(this); setupPreviewDisplayView(); // Initialize asset manager so that MediaPipe native libraries can access the app assets, e.g., // binary graphs. 
AndroidAssetUtil.initializeNativeAssetManager(this); eglManager = new EglManager(null); processor = new FrameProcessor( this, eglManager.getNativeContext(), “hand_tracking_mobile_gpu.binarypb”, “input_video”, “output_video”); processor .getVideoSurfaceOutput() .setFlipY(true);//applicationInfo.metaData.getBoolean("flipFramesVertically", FLIP_FRAMES_VERTICALLY) AndroidPacketCreator packetCreator = processor.getPacketCreator(); Map<String, Packet> inputSidePackets = new HashMap<>(); inputSidePackets.put(INPUT_NUM_HANDS_SIDE_PACKET_NAME, packetCreator.createInt32(NUM_HANDS)); processor.setInputSidePackets(inputSidePackets); processor.addPacketCallback( "hand_presence", (packet) -> { Boolean handPresence = PacketGetter.getBool(packet); if (!handPresence) { Log.d( TAG, "[TS:" + packet.getTimestamp() + "] Hand presence is false, no hands detected."); } }); processor.addPacketCallback( "hand_landmarks", (packet) -> { byte[] landmarksRaw = PacketGetter.getProtoBytes(packet); try { LandmarkProto.NormalizedLandmarkList landmarks = LandmarkProto.NormalizedLandmarkList.parseFrom(landmarksRaw); if (landmarks == null) { Log.d(TAG, "[TS:" + packet.getTimestamp() + "] No hand landmarks."); return; } // Note: If hand_presence is false, these landmarks are useless. Log.d( TAG, "[TS:" + packet.getTimestamp() + "] #Landmarks for hand: " + landmarks.getLandmarkCount()); Log.d(TAG, getLandmarksDebugString(landmarks)); } catch (InvalidProtocolBufferException e) { Log.e(TAG, "出错了Couldn't Exception received - " + e); return; } }); PermissionHelper.checkAndRequestCameraPermissions(this); } Answers: username_1: Hey @username_0 Can you share gradle. May be you used older camerax_version. Mediapipe support cameraX version 1.0.0-beta10 since 0.7.12. But AAR document example use 1.0.0-alpha06 username_0: thanks.I use your config Solved the problem.good luck! username_0: Thank you again, let me feel the enthusiasm of open source. 
This is the complete dependency that needs to be noted(camera-lifecycle): **implementation "androidx.camera:camera-lifecycle:$camerax_version" def camerax_version = "1.0.0-alpha10" implementation "androidx.camera:camera-core:$camerax_version" implementation "androidx.camera:camera-camera2:$camerax_version" implementation "androidx.camera:camera-lifecycle:$camerax_version"** Status: Issue closed username_2: Hi username_1, do you have any code sample/snippet working with 1.0.0-betaXX username_0: Yes, I have successfully compiled Android applications for edge detection and face detection.If you are experienced in this aspect, please help me solve this problem:https://github.com/google/mediapipe/issues/1450 Humor:where are you from? username_1: Update your project to latest version. It has supported cameraX 1.0.0-beta 10. username_0: Complete gradle: ``` apply plugin: 'com.android.application' android { compileSdkVersion 30 buildToolsVersion "30.0.3" defaultConfig { applicationId "com.ruggear.remotehandtracking" minSdkVersion 21 targetSdkVersion 28 versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8 } compileOptions { sourceCompatibility = 1.8 targetCompatibility = 1.8 } } dependencies { implementation fileTree(dir: "libs", include: ["*.jar","*.aar",]) implementation 'androidx.appcompat:appcompat:1.2.0' implementation 'androidx.constraintlayout:constraintlayout:2.0.4' testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test.ext:junit:1.1.2' androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0' // MediaPipe deps implementation 'com.google.flogger:flogger:0.3.1' implementation 'com.google.flogger:flogger-system-backend:0.3.1' implementation 
'com.google.code.findbugs:jsr305:3.0.2' implementation 'com.google.guava:guava:27.0.1-android' implementation 'com.google.guava:guava:27.0.1-android' //implementation 'com.google.protobuf:protobuf-lite:3.0.0' implementation 'com.google.protobuf:protobuf-java:3.11.4' def camerax_version = "1.0.0-rc01"//1.0.0-alpha06 1.0.0-alpha10 1.0.0-rc01 implementation "androidx.camera:camera-core:$camerax_version" implementation "androidx.camera:camera-camera2:$camerax_version" ``` implementation "androidx.camera:camera-lifecycle:$camerax_version" }
encode/httpx
579253850
Title: Suppress error "returning true from eof_received() has no effect when using ssl" Question: username_0: I'm using httpx 0.12 w/ python 3.6 behind an SSL-intercepting proxy. verify=False I'm constantly getting error "returning true from eof_received() has no effect when using ssl" while my async function is running. Is there any way to suppress this error? Answers: username_1: Could you include a traceback. It'd be great to have enough information to be able to replicate this - what proxy are you using and what would we need to do to reproduce the issue? username_0: Unfortunately this error is not being caught as an exception. It just prints to the console as I'm running my async function. Sorry for being ignorant, but where should I look to print more information on the error? I think I've nailed this error down to a specific set of hosts I'm making requests against. I'll try to find any common attributes among those hosts. The proxy being used is bluecoat proxySG, which afaik uses squid. I have confirmed that the error also does print when using a squid proxy that does not perform SSL-intercept, otherwise known as "SSL bump". 
username_0: Ok, I was able to duplicate this off-network with the following setup: squid configured with a basic config:

```
http_port 127.0.0.1:3128
http_access allow all
```

code:

```python
import asyncio
import httpx
import random

timeout = 10
useragent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:72.0) Gecko/20100101 Firefox/72.0'
headers = {'User-Agent': useragent}
targets = 'path to list of hostnames'
livetargets = []
proxies = {'all': 'http://127.0.0.1:3128'}

with open(targets) as hostslist:
    hostslist = hostslist.readlines()
hostslist = [host.strip() for host in hostslist]
hostslist = list(dict.fromkeys(hostslist))
random.shuffle(hostslist)
sema = asyncio.BoundedSemaphore(50)

async def get_url(host):
    async with sema, httpx.AsyncClient(http2=True, verify=False, proxies=proxies) as client:
        try:
            print(host)
            https = await client.get('https://'+host+'/', timeout=timeout, headers=headers, allow_redirects=False)
            if https:
                url = 'https://'+host+'/'
                return host, url, https.status_code, https.reason_phrase, https.http_version
        except:
            try:
                http = await client.get('http://'+host+'/', timeout=timeout, headers=headers, allow_redirects=False)
                if http:
                    url = 'http://'+host+'/'
                    return host, url, http.status_code, http.reason_phrase, http.http_version
            except:
                return False

def main():
    loop = asyncio.get_event_loop()
    tasks = [get_url(host) for host in hostslist]
    results = loop.run_until_complete(asyncio.gather(*tasks))
    loop.close()
    for result in results:
        if result:
            livetargets.append(result)
    print(livetargets)
    print(len(livetargets))

if __name__ == '__main__':
    main()
```

username_2: ) username_1: Indeed. One other thing too - it'd be useful to isolate this as asyncio-only (or not). Does the issue replicate if you're using `trio` instead? username_0: @username_2 error: "returning true from eof_received() has no effect when using ssl" Unfortunately this is not caught as an exception so I don't have any details on the module or line number. It just prints to the console while my script is running.
This only affects asyncio when a proxy is used. I could not get the error to print when using trio. Please see the following code which should allow you to easily reproduce this. This includes 100 randomly selected hosts from att.com. ```python import asyncio import httpx hostslist = ['ffmi04.dp.att.com', 'vnae02.acss.att.com', 'h-135-197-231-61.research.att.com', 'tmla-demo.att.com', 'h-135-207-51-140.research.att.com', 'extrameasures.att.com', 'almdm02.dp.att.com', 'h-135-207-1-50.research.att.com', 'h-135-207-50-100.research.att.com', 'h-135-207-7-10.research.att.com', 'seasons.casab.hvs.att.com', 'h-135-207-168-11.research.att.com', 'iqiproddis02.iqi.labs.att.com', 'h-135-207-58-41.research.att.com', 'q1eam1w10.stage.att.com', 'vnch0203.acss.att.com', 'b2bsase.att.com', 'recvdna.att.com', 'h-135-207-10-151.research.att.com', 'afmfe3.att.com', 'dss-p1mbw26.att.com', 'labsphere.labs.att.com', 'm.att.com', 'oauthapp2.stage.att.com', 'hvd-intl08.att.com', 'h-135-197-228-131.research.att.com', 'mdmint01.dp.att.com', 'h-135-207-172-1.research.att.com', 'am01.acss.att.com', 'dev-cybersecurityservices.att.com', 'h-135-207-178-210.research.att.com', 'h-135-197-231-60.research.att.com', 'tlws-ssl-ngeag.att.com', 'afmfe2.att.com', 'h-135-207-19-151.research.att.com', 'attreg.indigo.test.att.com', 'saml.e-access.att.com', 'autodiscover.att.com', 'h-135-207-133-80.research.att.com', 'anpw15fep1webext.attpoc1.att.com', 'lsreg.indigo.test.att.com', 'caemp2.vpn.att.com', 'assetmanagement.att.com', 'vnfr09.acss.att.com', 'h-135-207-38-10.research.att.com', 'mpoc.iot.att.com', 'vndh150.acss.att.com', 'h-135-207-52-110.research.att.com', 'h-135-197-240-41.research.att.com', 'h-135-197-246-50.research.att.com', 'seasons.calag.hvs.att.com', 'qvscol05.ciq.labs.att.com', 'h-135-207-35-70.research.att.com', 'h-135-207-49-241.research.att.com', 'h-135-207-31-10.research.att.com', 'opss-p2cgw9.att.com', 'h-135-207-22-21.research.att.com', 'demovnastd.att.com', 'apivdna.att.com', 
'av.claritysfb.labs.att.com', 'believeneworleans.att.com', 'myattwx25.att.com', 'ph-connection.dlife.att.com', 'h-135-207-52-70.research.att.com', 'customerapi-workforcemanager.att.com', 'txssl9.vpn.att.com', 'bc-sf.att.com', 'h-135-207-60-120.research.att.com', 'ab02.acss.att.com', 'h-135-207-253-190.research.att.com', 'h-135-207-31-101.research.att.com', 'apsapitest01.att.com', 'h-135-207-53-100.research.att.com', 'h-135-207-168-120.research.att.com', 'apps.firstnet.att.com', 'h-135-207-57-201.research.att.com', 'h-135-207-51-90.research.att.com', 'h-135-207-62-190.research.att.com', 'h-135-207-53-60.research.att.com', 'xdmca.wireless.att.com', 'cnosc.att.com', 'h-135-207-57-211.research.att.com', 'h-135-207-105-210.research.att.com', 'atw-expe-cl2-1.att.com', 'h-135-207-180-61.research.att.com', 'ae03.acss.att.com', 'staging.synaptic.att.com', 'opssp9w1.att.com', 'iiwc.dlife.att.com', 'xoauthaccess-sf.att.com', 'h-135-207-241-121.research.att.com', 'mytime-mobile.test.att.com', 'connect1.uc.att.com', 'csi-systest.test.dlife.att.com', 'h-135-207-98-150.research.att.com', 'h-135-207-178-220.research.att.com', 'edge1.exch.att.com', 'origin-wbfc-lyn2.att.com', 'seicmsdev.test.att.com', 'h-135-207-249-201.research.att.com'] sema = asyncio.BoundedSemaphore(20) async def get_url(host): async with sema, httpx.AsyncClient(proxies=proxies) as client: try: print(host) https = await client.get('https://'+host+'/') if https: url = 'https://'+host+'/' return host,url,https.status_code,https.reason_phrase,https.http_version except Exception as e: print(e) def main(): loop = asyncio.get_event_loop() tasks = [get_url(host) for host in hostslist] results = loop.run_until_complete(asyncio.gather(*tasks)) loop.close() if __name__ == '__main__': main() ``` username_2: Unfortunately I don't have a local proxy running so I was not able to reproduce… Can you share some steps to setup such a proxy? 
username_0: I'll use debian-based linux distro steps, but it should be similar for any linux distro.

```
apt install squid
mv /etc/squid/squid.conf /etc/squid/squid.conf.orig
printf 'http_port 127.0.0.1:3128\nhttp_access allow all' > /etc/squid/squid.conf
systemctl start squid
```

That should give you a functional squid proxy listening on 127.0.0.1:3128.

username_3: returning true from eof_received() has no effect when using ssl — I also hit the same problem:

```python
# proxy software: v2rayN
proxies = {
    "http": "http://127.0.0.1:10809",
    "https": "http://127.0.0.1:10809",
}
headers = {
    'Referer': 'https://www.pixiv.net/',
}
image_url = "https://i.pximg.net/img-original/img/2020/04/02/19/20/07/80517949_p0.jpg"
response = await client.get(image_url, headers=headers)
file_content = response.content
# do something
# https://www.pixiv.net/ has many urls like this
# it is successful, but sometimes logs "returning true from eof_received() has no effect when using ssl"
```

username_4: I had the same problem. I found that it was asyncio/sslproto.py that was throwing the error and just commented the warning out. Linux: /usr/lib/<python version>/asyncio/sslproto.py Windows: <python version>/Lib/asyncio/sslproto.py

```python
def eof_received(self):
    """Called when the other end of the low-level stream is half-closed.

    If this returns a false value (including None), the transport
    will close itself. If it returns a true value, closing the
    transport is up to the protocol.
    """
    try:
        if self._loop.get_debug():
            logger.debug("%r received EOF", self)

        self._wakeup_waiter(ConnectionResetError)

        if not self._in_handshake:
            keep_open = self._app_protocol.eof_received()
            if keep_open:
                logger.warning('returning true from eof_received() '
                               'has no effect when using ssl')
```

username_3: @username_4 thanks, this is also available:

```python
import logging, asyncio
asyncio.log.logger.setLevel(logging.ERROR)
# or just asyncio.log.logger.setLevel(40)
```

username_5: I also had this issue and it was crashing my jupyter notebook due to the long outputs (making a lot of requests). I can confirm that adding `asyncio.log.logger.setLevel(logging.ERROR)` resolved it. @username_2 would you like me to submit a pull request? I can either add this to the docs somewhere or aim to incorporate it into the library. username_2: @username_5 Thanks for suggesting, but I don't think hiding asyncio warnings is a good solution long term. But feel free to use this as a _workaround_ for now. There may still be something we need to account for to make this error not appear at all. @username_0 Thanks for the pointers about Squid, it's very useful. I'm going to give this a shot using [`docker-squid`](https://github.com/sameersbn/docker-squid), see if I can replicate… Stay tuned!
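Editor's note: rather than raising the whole asyncio logger to ERROR as in the `setLevel` workaround above (which also hides unrelated asyncio warnings), a narrower option is a logging filter that drops only this one message. The class name here is invented:

```python
import logging


class SuppressEofReceivedWarning(logging.Filter):
    """Drop only the 'returning true from eof_received()' warning."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False discards the record; everything else passes through.
        return "eof_received" not in record.getMessage()


logging.getLogger("asyncio").addFilter(SuppressEofReceivedWarning())
```

This keeps other asyncio log output intact, which matters if you rely on asyncio's warnings for debugging.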
username_2: Okay, here's my setup with `httpx==0.12.1`*:

```properties
# squid.conf
http_port 3128
http_access allow all
```

```bash
# proxy.sh
exec docker run --rm \
    --name squid \
    --publish 3128:3128 \
    --mount type=bind,source="$(pwd)/squid.conf",target=/etc/squid/squid.conf \
    sameersbn/squid
```

```python
import asyncio
import httpx

proxies = "http://localhost:8080"
target = "https://www.google.com"
num_tasks = 20
num_ok = 0

async def fetch() -> None:
    global num_ok
    async with httpx.AsyncClient(proxies=proxies) as client:
        await client.get(target)
        num_ok += 1

async def main() -> None:
    tasks = [fetch() for _ in range(num_tasks)]
    await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
assert num_ok == num_tasks  # Shows that errors aren't actual exceptions.
```

With this, I'm able to reproduce the `returning true from eof_received() has no effect when using ssl` error logs from asyncio. It appears a random number of times (between 0 and `num_tasks`) depending on the run. (Obviously the error doesn't show up when requesting an HTTP server, eg a uvicorn server on localhost.) I'm still not sure what's causing this issue (probably some kind of race condition due to `.gather()`), but at least now we've got a reproducible setup.

\*On `master` this setup is completely broken, seems like there's a bug in `httpcore`… Any idea @username_1?
```console Traceback (most recent call last): File "client.py", line 13, in fetch r = await client.get(target) File "/Users/florimond/Developer/python-projects/httpx/httpx/_client.py", line 1279, in get [Truncated] File "/Users/florimond/.pyenv/versions/3.8.2/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete return future.result() File "client.py", line 20, in main await asyncio.gather(*tasks) File "client.py", line 15, in fetch num_ok += 1 File "/Users/florimond/Developer/python-projects/httpx/httpx/_client.py", line 1454, in __aexit__ await self.aclose() File "/Users/florimond/Developer/python-projects/httpx/httpx/_client.py", line 1443, in aclose await proxy.aclose() File "/Users/florimond/Developer/python-projects/httpx/venv/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 275, in aclose await self._remove_from_pool(connection) File "/Users/florimond/Developer/python-projects/httpx/venv/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 258, in _remove_from_pool self._connection_semaphore.release() File "/Users/florimond/Developer/python-projects/httpx/venv/lib/python3.8/site-packages/httpcore/_backends/asyncio.py", line 210, in release self.semaphore.release() File "/Users/florimond/.pyenv/versions/3.8.2/lib/python3.8/asyncio/locks.py", line 533, in release raise ValueError('BoundedSemaphore released too many times') ValueError: BoundedSemaphore released too many times ``` username_6: I can confirm there's a bug in httpcore's handling of tunnel proxies, I've opened https://github.com/encode/httpcore/issues/54 to address it. username_1: Closed via https://github.com/encode/httpcore/issues/54 Status: Issue closed username_7: @username_1 Just for fully understanding - I still see the "returning true from eof_received() has no effect when using SSL" logs even on latest httpcore. Should I just change the logging level or should we re-open it? Thanks! 
username_2: @username_7 Yeah, it seems we closed this prematurely. #54 and the subsequent PR #57 were addressing the "unrelated bug" I was referring to earlier, not the original issue here. Lemme reopen. username_2: I've marked this as `external` because I believe this should be treated as a CPython "bug". See: https://github.com/python/cpython/blob/0ef96c2b2a291c9d2d9c0ba42bbc1900a21e65f3/Lib/asyncio/streams.py#L269-L278 The code there uses a `._over_ssl` flag to return `False` in the SSL case, which should prevent the warning from firing, but it does. So maybe that flag is incorrectly set? I don't think there's actually anything we can do at the HTTPX / HTTPCore level. There's a workaround above, and if this is causing too much trouble I'd recommend filing a ticket to CPython and helping solve the issue over there. Status: Issue closed username_2: Woops, okay, let me go through this slowly because I think it's actually on us… There are three things at play here:

* asyncio's `StreamReaderProtocol`: it's got a `._over_ssl` flag. It's `False` initially, then set to "does the transport have an `sslcontext`?" when `.connection_made()` is called. https://github.com/python/cpython/blob/0ef96c2b2a291c9d2d9c0ba42bbc1900a21e65f3/Lib/asyncio/streams.py#L236
* asyncio's `SSLProtocol`: it _optionally_ calls `.connection_made()` on the protocol it wraps after the handshake completes.
* asyncio's `loop.start_tls`: it creates a `SSLProtocol` wrapping the current transport protocol, _but does not instruct it to call `.connection_made()` on the current protocol_ as per `call_connection_made=False` (since I guess asyncio assumes that protocol is "already running"). https://github.com/python/cpython/blob/0ef96c2b2a291c9d2d9c0ba42bbc1900a21e65f3/Lib/asyncio/base_events.py#L1215-L1219
* Our own `.start_tls()` wrapper, which instantiates a `StreamReaderProtocol`, does not call `.connection_made()` on it, then passes that to asyncio's `start_tls()`.

So in the end, the `StreamReaderProtocol`'s `.connection_made()` doesn't get called, so `_over_ssl` isn't set, and so the warning is logged… So, how do we fix this? To fix this, I think we need to modify our `.start_tls()` wrapper as follows:

```diff
- stream_reader = asyncio.StreamReader()
- protocol = asyncio.StreamReaderProtocol(stream_reader)
  transport = self.stream_writer.transport
+ protocol = transport.get_protocol()
```

Status: Issue closed username_8: Hi @username_2, thanks for the investigation and the fix. I understand that the fix is not released as long as https://github.com/encode/httpcore/pull/251 is not merged. Do you know when it would be available? username_2: @username_8 Ah, thanks for following up. Indeed I had let the release slip through as I didn't get a review of the changelog there. Anyone can issue release PRs to HTTPCore, so if you're up for it you could either create a new one based on encode/httpcore#251 (if new PRs got merged to HTTPCore in the meantime), or just review the existing one in case it's up to date. :-) When we're ready I'd just need to merge and release from GitHub UI, but the rest can be contributor driven.
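Editor's note: username_2's diagnosis — that `_over_ssl` is only set when `.connection_made()` runs on the `StreamReaderProtocol` — can be checked in isolation with a stub transport. This is illustrative only: `FakeSSLTransport` and `demo` are invented names, and `_over_ssl` is a private CPython attribute.

```python
import asyncio


class FakeSSLTransport:
    """Minimal stub: only get_extra_info('sslcontext') matters here."""

    def get_extra_info(self, name, default=None):
        # Pretend an SSL context is attached, as for a real TLS transport.
        return object() if name == "sslcontext" else default


async def demo() -> bool:
    reader = asyncio.StreamReader()
    protocol = asyncio.StreamReaderProtocol(reader)
    # Without this call, _over_ssl stays False and asyncio's StreamReaderProtocol
    # logs "returning true from eof_received() has no effect when using ssl".
    protocol.connection_made(FakeSSLTransport())
    return protocol._over_ssl


print(asyncio.run(demo()))
```

This mirrors the fix quoted above: reusing the transport's existing protocol (whose `.connection_made()` has already run) instead of a freshly constructed one keeps the flag correct.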
NASA-AMMOS/3DTilesRendererJS
939388749
Title: Support newer threejs versions Question: username_0: There's some non-trivial changes in a recent version of threejs (~125->126 I believe?) with the conversion to es modules. Issue to test it out with newer three versions. Answers: username_1: [r127 -> r128](https://github.com/mrdoob/three.js/wiki/Migration-Guide#127--128) might be the version you're thinking of. The examples folder (which we're only using for GLTFLoader) was all changed to use classes and changed to use a bare module specifier on npm. I don't think that should have broken anything but let me know if you find anything! Status: Issue closed username_1: I'm using this successfully with three.js r130 so I'll close this for now. If we see anything specific that breaks with a new release we can make new issues. username_0: I've been using 128 for some time now without any compatibility problems. The log message I saw that prompted me to post this ended up being related to something else in our webpack config - (the joy of loading a second copy of three for a couple days by accident!). Forgot to follow up.
home-assistant/core
1042838093
Title: Tuya AirMaster Air Filter returns Fan Error Question: username_0: ### The problem The Tuya AirMaster Air Filter fan control has 5 levels. While switches are set up properly by the integration, the fan is not. The issue is present in 2021.11.0b5 and previous releases of 2021.11. Device ID 246218102cf432391fb2 Product Category kj ### What version of Home Assistant Core has the issue? Core-2021.11.0bx ### What was the last working version of Home Assistant Core? 2021.10.x ### What type of installation are you running? Home Assistant OS ### Integration causing the issue New Tuya API ### Link to integration documentation on our website https://rc.home-assistant.io/integrations/tuya/ ### Example YAML snippet ```yaml none ``` ### Anything in the logs that might be useful for us? ```txt Logger: homeassistant.components.fan Source: components/tuya/fan.py:165 Integration: Fan (documentation, issues) First occurred: 4:51:09 PM (2 occurrences) Last logged: 4:51:09 PM Error adding entities for domain fan with platform tuya Error while setting up tuya platform for fan Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 382, in async_add_entities await asyncio.gather(*tasks) File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 607, in _async_add_entity await entity.add_to_platform_finish() File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 715, in add_to_platform_finish self.async_write_ha_state() File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 486, in async_write_ha_state self._async_write_ha_state() File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 521, in _async_write_ha_state attr.update(self.state_attributes or {}) File "/usr/src/homeassistant/homeassistant/components/fan/__init__.py", line 617, in state_attributes data[ATTR_SPEED] = self.speed [Truncated] "value": false }, { "code": "light", "value": false }, { "code": 
"filter_reset", "value": false } ] --------------------------------- switch | Boolean | "{true,false}" speed | Enum | { "range": [ "1", "2", "3", "4", "5" ] } lock | Boolean | "{true,false}" light | Boolean | "{true,false}" filter_reset | Boolean | "{true,false}" Answers: username_0: Issue https://github.com/home-assistant/core/issues/58933#issue-1042155432 seems to be related. Same Applies to issue https://github.com/home-assistant/core/issues/58973#issue-1042924614
poyadav/twitter
565044095
Title: Project Feedback! Question: username_0: <img alt="+1" title="+1" src="/images/emoji/unicode/1f44d.png"}" style="vertical-align:middle" width="20" height="20" /> Nice work! This week, we continued to explore how to build apps that use an API (like Twitter). Unlike the movies app, we created a new class called TwitterAPICaller to help us interact with the API. We're also starting to introduce Auto Layout, which is how you make your app work for different phone sizes. Now that you've finished the app for the week, it's good to reflect on a few things: - Manual segue for the login button. Remember that we couldn't create a segue directly from the login button because we have to check the user's credentials. If they enter the wrong password (or the login fails), you don't want to segue to the next screen. - UserDefaults. We used UserDefaults to keep track of whether the user was logged in or not. If they were already logged in, we went directly to the tweets screen. UserDefaults is a great place to keep track of things you want to save locally, but not save on the server. For example, if you want to show a popup message one time only, you could use UserDefaults to keep track of whether you've shown the popup message already. - TwitterAPICaller. Go back to the project and look through this file that we provided. There are some functions related to authentication that you can ignore. Twitter uses OAuth 1.0a for authentication, which is an old standard. Most new APIs will use something similar to OAuth 2. Other than the authentication functions, the class is pretty simple, and you can create something similar to interact with other APIs. Check out the [assignment grading page](https://courses.codepath.org/snippets/ios_university/grading_spring_19) for a breakdown of how submissions are scored. 
If you have any technical questions about the project or concepts covered this week, post a question on our [Discussions Forum](https://discussions.codepath.org) and mark the question as type "Curiosity". For general questions email us at <<EMAIL>>.
Answers: username_0: Looks like your **README is incomplete** for this assignment 😬. The README helps us to make sure we don't miss any required or optional stories you have completed.
**Make sure you have completed all of the following steps for your README:**
1. Make sure you have the correct README for this assignment; go to the "Submitting your App Assignment" section in the Assignment Tab for the corresponding unit in the [course portal](https://courses.codepath.org).
1. Please mark off all completed user stories with an `[x]`
1. Add a link to your animated gif walkthrough to your README and make sure it renders (animates) when viewing the README.
Once completed, please push your updates and **submit your assignment again so we can regrade it**. Still confused about how to properly submit your assignment? Check out the [Submitting Coursework](https://courses.codepath.org/snippets/ios_university/submitting_coursework.md) guide for detailed instructions.
Whenever you make updates to your project that require re-grading, you need to **re-submit** your project using the submit button on the associated assignment page in the course portal. This will flag your project as "updated" on our end and we know to re-grade. You should re-submit your assignment anytime you:
- Update a previously incomplete assignment
- Add optional and additional features to an already completed assignment
/cc @username_0
ArctosDB/arctos
303567989
Title: flat sex Question: username_0: current autogen attributes code works for all attributes
- sex is required in results and cached in flat
- use the cache/exclude sex (any others??) from the autogen code for performance boost
Answers: username_0: also make sure it remains type attribute (not required) to stay in expected search place
Status: Issue closed
username_0: rebuild code generator
jadrum/RestaurantManager
344096373
Title: Admin/Manager - Managing current menu Question: username_0: Admins and managers should have the ability to manage the restaurant's current menu by adding and removing menu items to a list. This list contains the items which customers can currently purchase at that particular time. This will be beneficial to restaurant managers and admins because it will allow them to easily add/remove menu items based on their availability without having to recreate them in the system (meaning, if a menu item is removed, it still exists in the db, just not in the active menu).

In this section admins and managers can firstly specify a subcategory of the menu (e.g. pizza, pasta, hamburger, draft beer, etc.), which is different than the menu item subcategories (drinks, appetizers, desserts, and entrees). This is designed this way so that different kinds of restaurants can take advantage of this application and make their menu custom to their business model (e.g. bars, nightclubs, sub shops, steakhouses, etc.).

Menu categories can be assigned a unique color which will be used on the tile in the order section to differentiate different menu types (e.g. pizza=yellow, pasta=red, salad=green, etc.). This will help employees locate menu items faster when managing tabs (a different feature).
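For illustration, one possible shape for the data model described above (all names are hypothetical, not the project's actual schema), where deactivating an item removes it from the active menu but keeps it in the catalog:

```python
from dataclasses import dataclass, field


@dataclass
class MenuItem:
    name: str
    price: float
    category: str  # restaurant-defined subcategory, e.g. "Pizza"


@dataclass
class Menu:
    catalog: list = field(default_factory=list)          # every item ever created
    active: set = field(default_factory=set)             # names currently purchasable
    category_colors: dict = field(default_factory=dict)  # e.g. {"Pizza": "yellow"}

    def add_item(self, item: MenuItem) -> None:
        self.catalog.append(item)

    def activate(self, name: str) -> None:
        self.active.add(name)

    def deactivate(self, name: str) -> None:
        # Removed from the active menu, but still present in the catalog/db.
        self.active.discard(name)

    def current_menu(self) -> list:
        return [item for item in self.catalog if item.name in self.active]
```

This separation is what lets an item be taken off the menu without being recreated later: only membership in `active` changes.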
nmohanty/NMmailbox
187192391
Title: Project Feedback! Question: username_0: :+1: nice work. A few notes after checking out the code: * Good job with presenting the compose view! 📝 * To get your icons to fade in/out as the user pans, consider using the `convertValue()` method, found in the [Common.swift](https://www.dropbox.com/s/mzfmjlvv863x95e/Common.swift?dl=0) file to map the translation from your pan gesture recognizer to the value of the alpha property for your icon view. * Consider adding functionality that allows the user to swipe from the edge to reveal the menu. You can add all of your views into another view (superview) by selecting them all in the document outline and the choosing [Embed in -> View](http://guides.codepath.com/ios/Creating-Nested-Views#embed-in-view) from the **Editor** drop down menu. You can then add a [screen edge pan gesture recognizer](http://guides.codepath.com/ios/Using-Gesture-Recognizers#1-example-screen-edge-pan-gesture-recognizer) to that view and use the same technique you did for panning the message to slide the view to reveal the menu. Let us know if you have any other thoughts or questions about this assignment. Hopefully by now you feel pretty comfortable using gesture recognizers, animations and view properties and see how they all fit together. We are close now to a turning point in the course where you should be hitting a "critical mass" towards your knowledge of iOS.
iiasa/ixmp
469808860
Title: errors when trying R installation Question: username_0: I got the following errors when trying our R installation instructions (which do not mention the expectation of such errors):
```
$ R CMD build .
* checking for file ‘./DESCRIPTION’ ... OK
* preparing ‘rixmp’:
* checking DESCRIPTION meta-information ... OK
Error in loadVignetteBuilder(pkgdir, TRUE) : vignette builder 'knitr' not found
Execution halted
```
(fixed by installing `knitr`) then
```
$ R CMD INSTALL *.tar.gz
* installing to library ‘/home/username_0/.local/R/library’
ERROR: dependency ‘reticulate’ is not available for package ‘rixmp’
* removing ‘/home/username_0/.local/R/library/rixmp’
```
(fixed by installing `reticulate`)
Answers: username_0: cc @adrivinca and/or @username_1 not sure if these are expected or not. Quite easy to fix, but maybe worth adding a note in the docs? Also, the docs say that `reticulate` will be installed for you in the second step, but I did not find that to be the case.
username_1: @username_0, thanks for the check. Indeed, `R CMD install` does not fetch or install dependencies from CRAN. (If we provided a CRAN package (we don't), then fetching and installing rixmp from CRAN would also bring these in.) I think the most robust fix would be adding `Rscript -e "install.packages(c('knitr', 'reticulate'))"` to the instructions.
username_0: +1
Status: Issue closed
Munter/subfont
426425574
Title: Identical pages use different font files despite sharing identical subset Question: username_0: When processing multiple identical pages, the resulting font files are different (with a different filename; the `md5` section is identical, but there is an extra `-${i}` suffix to duplicates) despite sharing the same glyph subset. I've set up a [test repository](https://github.com/username_0/subfont-duplicate-issue), with two identical source pages, which should use the same font file after processing (`yarn build`), but `subfont` somehow yields two identical font files, each page referring to each of them. Status: Issue closed Answers: username_1: Thanks for the report, it was really easy to approach because of the repro case :heart: Turned out this was a regression that got introduced in assetgraph/assetgraph#915. I've fixed it for the next release. username_1: Fix released in assetgraph 5.8.2. Since subfont has a dependency on assetgraph ^5.3.1, you should receive the fix if you run `yarn upgrade`.
hudovisk/react-optimize
569916615
Title: Question: Why not have a default variant? (for testing) Question: username_0: Hi, first of all, I find this library very useful. I was wondering why you do not load a variant by default. The reasoning behind my question is that while testing something within a variant, it is quite tricky to set the variant, so in `jest` for instance it is hard to assert something like a button that is inside the variants, making testing a quite simple library quite complex. If there was an attribute like `default`, that would simplify things, or maybe you had other use cases in mind when you implemented it.
Answers: username_0: OK, I just got this, but to be honest the loader attribute was not very intuitive. :) Anyhow, you may close this issue, sorry for the noise.
Status: Issue closed
username_1: What is the "loader" attribute?
tinymce/tinymce
810036823
Title: CodeSample plugin removes single linebreak Question: username_0: **What is the current behavior? Describe the bug**
In the editor I input the Python code and it works fine as below:
```
from django.contrib.auth import get_user_model
list(get_user_model().objects.filter(is_superuser=True).values_list('username', flat=True))
```
Here I press the Enter key **one time** at the end of the first line. However, when I see the output on the HTML page it looks like this:
```
from django.contrib.auth import get_user_modellist(get_user_model().objects.filter(is_superuser=True).values_list('username', flat=True))
```
The two lines are concatenated. Now if I go back to the editor and press the Enter key one more time at the end of the first line:
```
from django.contrib.auth import get_user_model

list(get_user_model().objects.filter(is_superuser=True).values_list('username', flat=True))
```
The HTML output is like this:
```
from django.contrib.auth import get_user_model

list(get_user_model().objects.filter(is_superuser=True).values_list('username', flat=True))
```
So this time the lines are not concatenated, but the extra line is showing.
**Please provide the steps to reproduce and if possible a minimal demo of the problem via [fiddle.tiny.cloud](https://fiddle.tiny.cloud/) or similar.**
**What is the expected behavior?**
I'm expecting one Enter key to be enough.
**Which versions of TinyMCE, and which browser / OS are affected by this issue? Did this work in previous versions of TinyMCE?**
django-tinymce 3.2.0
Windows 10 Pro
Microsoft Edge Dev and Firefox Development
Answers: username_0: Hi @username_1 So I decided to integrate Tiny Cloud with my Django project, but your integration guide here (https://www.tiny.cloud/docs/integrations/django/) does not provide complete setup instructions. It takes more than just adding two lines to the ```settings.py``` to get Tiny Cloud to work with Django?????
username_0: @username_1 Regarding my first issue, the team behind ```django-tinymce``` only provide the interface between Django and your editor. They haven't changed anything in the code. If I had issue with installation and configuration of their module I had to contact them, but this is not the case. This issue is related to the editor and plugin created by Tiny. Now, I just checked the source of output HTML page and noticed that the Enter between first and second line is converted to ```<br>``` not ```<p>```. There should be a configuration option to stop that from happening? username_0: @username_1 Here's what it looks like when I look at the source code (```code``` plugin) of my text: ``` <p>So you have lost your Django's admin/superuser <em>username.</em> Don't worry it's fairly easy to find it.</p> <p>First, make sure the virtual environment is up and running. Then enter the Django shell:</p> <pre class="language-python"><code>python manage.py shell</code></pre> <p>Now run the following commands:</p> <pre class="language-python"><code>from django.contrib.auth import get_user_model list(get_user_model().objects.filter(is_superuser=True).values_list('username', flat=True))</code></pre> <p>You will get a response like this:</p> <pre class="language-python"><code>['Omid']</code></pre> <p>I have just one admin/superuser but if you have more than that, you'll see them all.</p> ``` There is no ```<br>``` or ```<p>``` between two lines of the Python code ```<pre>...</pre>``` but I have pressed the Enter key in the editor (picture below). There should be a ```<br>``` after ```get_user_model```. ![1](https://user-images.githubusercontent.com/31741561/108208813-c57ac300-713e-11eb-920b-4142a45a3678.png) username_1: You said in your initial reply that the issue was with it being rendered. TinyMCE has no control of how the content is rendered **outside** the editor, that's where the integration comes in and hence my suggestion to contact them. 
Given this can't be reproduced in the fiddle provided and you haven't mentioned it didn't work there, any issues you're having are still likely to be an issue outside of TinyMCE.
With regards to your question about the `<br>`, the answer is no, it should not be included, as it's in a `<pre>` element so it'll render the text verbatim (that includes new line characters). This is how all code samples generally work and they are the same on the prism.js site itself and even here on GitHub (see the samples in your description). Additionally, adding a `<br>` in there would actually be unwanted, as then it'd be included when editing the code sample, so TinyMCE is working as expected here. One thing to check is that you aren't using any CSS that would cause a `pre` block to render as regular HTML (e.g. `white-space: normal`).
username_0: @username_1 Please let me know how to integrate TinyMCE with Django without any third-party apps. Thanks.
username_2: Integrating TinyMCE with Django, even without third-party apps such as the django-tinymce integration, is more of a Django question than a TinyMCE question. All that's necessary from the TinyMCE side of the equation is that you add a script tag or similar with tinymce.min.js to the page, and then call `tinymce.init` in that (or another) script. Adding that to your Django site is a question that might be better suited to Stack Overflow, or a Django-specific forum.
pypa/pip
1113198013
Title: Heisenbug when installing with --constraint Question: username_0:
- This works fine 99% of the time on all other machines and across OSes.
- `intbitset==2.4.1` is available as a pre-built wheel under thirdparty/

Alright, I am at a loss to find a way to diagnose the issue. These kinds of issues started randomly with the new resolver. The pip version used is the one bundled in the included https://github.com/pypa/get-virtualenv/raw/20.7.2/public/virtualenv.pyz that I include in my etc/thirdparty for easy bootstrapping.

### Expected behavior

I would like to be able to trace the installation issue.

### pip version

21.2.3

### Python version

3.6

### OS

macOS v11.6

### How to Reproduce

ScanCode Toolkit is using a --constraint installation that fails sometimes. For instance on macOS with Python 3.6:

- `wget https://github.com/nexB/scancode-toolkit/releases/download/v30.1.0/scancode-toolkit-30.1.0_py36-macos.tar.xz`
- `tar -xf scancode-toolkit-30.1.0_py36-macos.tar.xz`
- `cd scancode-toolkit-30.1.0`
- `./configure`

### Output

```sh-session
See above for details
```

### Code of Conduct

- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).

Answers: username_0:
scancode-toolkit 30.1.0 depends on lxml<5.0.0 and >=4.6.3
The user requested (constraint) lxml==4.6.3
username_1: This might be an issue around wheel compatibility, that's manifesting as a dependency resolution issue?
username_2: At some point in the process I was getting the same conflict but on a different dependency.
username_0: It could well be, but how can I investigate and solve the issue? The error message is not helping me make a diagnosis. I kinda grok the pip internals a little and I understand it is really hard to provide proper actionable feedback. But if an experienced person cannot figure things out, then we have a problem to solve, because that's IMHO an important breakage in the pip UX and a regression from before the introduction of the resolver in pip.
If there is any wheel incompatibility on any wheel, we should return a message saying so, such as:
```
If there is any wheel incompatibility on any wheel, we should return a message saying so such as : ``` There is no available lxml==4.6.3 that can be installed from the list of available lxml==4.6.3 [<here print a list of wheels or sdist with their index or link origin>] from these index and links [<here print a list of indexes and links>] that is compatible with this environment constraints of [here print a list of constraints as markers/extra/] [and available as a pre-built binary wheel | buildable from sources (if there are binary/no-binary)] ``` So, what would you suggest I do to diagnose and resolve this issue here? Can I disable entirely dependency resolution and just force feed pip with a list of package requirements that I know work together? And how can I help fix this pip UX otherwise? username_0: yeah, this makes this a real heisenbug as the errors are not always the same. username_3: I wonder if that's related to this reproducible example: https://github.com/pypa/pip/issues/10391#issuecomment-1010628198 In that case the user is able to get different conflict error messages to display on different runs, I have reproduced their example locally. username_0: I can confirm that https://github.com/pypa/pip/issues/10391#issuecomment-1021408758 looks very similar. username_0: @username_4 I recall you have a mac ... may be you could test this? This would be super gentle of you! :heart: username_4: After a few iterations (of different packages and installing them manually). I finally arrived to `intbitset==2.4.1` which fails to install with the following clang error. <details> <summary> pip install intbitset==2.4.1 </summary> ``` Collecting intbitset==2.4.1 Using cached intbitset-2.4.1.tar.gz (152 kB) Preparing metadata (setup.py) ... done Requirement already satisfied: six in ./venv/lib/python3.10/site-packages (from intbitset==2.4.1) (1.16.0) Building wheels for collected packages: intbitset Building wheel for intbitset (setup.py) ... 
error ERROR: Command errored out with exit status 1: command: /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-<KEY>intbitset_<KEY>setup.py'"'"'; __file__='"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-<KEY>intbitset_<KEY>setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-wheel-9iqqsi8s cwd: /private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-2qcvgjva/intbitset_<KEY> Complete output (22 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-12-arm64-3.10 copying intbitset/intbitset_helper.py -> build/lib.macosx-12-arm64-3.10 copying intbitset/intbitset_version.py -> build/lib.macosx-12-arm64-3.10 running egg_info warning: no files found matching '*.css' under directory 'docs/_themes' warning: no files found matching '*.css_t' under directory 'docs/_themes' warning: no files found matching '*.conf' under directory 'docs/_themes' warning: no files found matching '*.html' under directory 'docs/_themes' warning: no files found matching 'COPYING' under directory 'docs/_themes' warning: no files found matching 'README' under directory 'docs/_themes' warning: no files found matching '*.html' under directory 'docs/_templates' writing manifest file 'intbitset/intbitset.egg-info/SOURCES.txt' running build_ext creating build/temp.macosx-12-arm64-3.10 creating build/temp.macosx-12-arm64-3.10/intbitset clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot 
/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -I/Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/include -I/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.10/include/python3.10 -c intbitset/intbitset.c -o build/temp.macosx-12-arm64-3.10/intbitset/intbitset.o -O3 -march=core2 -mtune=native clang: error: the clang compiler does not support '-march=core2' error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for intbitset Running setup.py clean for intbitset Failed to build intbitset Installing collected packages: intbitset Running setup.py install for intbitset ... error ERROR: Command errored out with exit status 1: command: /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-<KEY>intbitset_<KEY>setup.py'"'"'; __file__='"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-<KEY>intbitset_f7af9bbeb5ca43<KEY>setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-record-x6ezrxj5/install-record.txt --single-version-externally-managed --compile --install-headers /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/include/site/python3.10/intbitset cwd: /private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-2qcvgjva/intbitset_f7af9bbeb5ca43deb2833c9d5beb39bf/ Complete output (24 lines): running install /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py 
install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.macosx-12-arm64-3.10 copying intbitset/intbitset_helper.py -> build/lib.macosx-12-arm64-3.10 copying intbitset/intbitset_version.py -> build/lib.macosx-12-arm64-3.10 running egg_info warning: no files found matching '*.css' under directory 'docs/_themes' warning: no files found matching '*.css_t' under directory 'docs/_themes' warning: no files found matching '*.conf' under directory 'docs/_themes' warning: no files found matching '*.html' under directory 'docs/_themes' warning: no files found matching 'COPYING' under directory 'docs/_themes' warning: no files found matching 'README' under directory 'docs/_themes' warning: no files found matching '*.html' under directory 'docs/_templates' writing manifest file 'intbitset/intbitset.egg-info/SOURCES.txt' running build_ext creating build/temp.macosx-12-arm64-3.10 creating build/temp.macosx-12-arm64-3.10/intbitset clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -I/Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/include -I/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.10/include/python3.10 -c intbitset/intbitset.c -o build/temp.macosx-12-arm64-3.10/intbitset/intbitset.o -O3 -march=core2 -mtune=native clang: error: the clang compiler does not support '-march=core2' error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-2qcvgjva/intbitset_f7af9bbeb5ca4<KEY>setup.py'"'"'; 
__file__='"'"'/private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-install-2qcvgjva/intbitset_f7af9bbeb5ca43deb2833c9d5beb39bf/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/75/t8rmj49d3pl1y_rk3vbjzk_r0000gn/T/pip-record-x6ezrxj5/install-record.txt --single-version-externally-managed --compile --install-headers /Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/venv/include/site/python3.10/intbitset Check the logs for full command output. ``` </details> username_0: @username_4 Thank you ++ for helping there! Yes the repeated seemingly random errors is what is typical. Could you try this too? ``` pip uninstall pip intbitset pip install thirdparty/intbitset-2.4.1-cp36-cp36m-macosx_10_9_x86_64.macosx_10_10_x86_64.whl ``` because there is a prebuilt wheel version bundled. username_0: If you with Python 3.9, use instead https://github.com/nexB/scancode-toolkit/releases/download/v30.1.0/scancode-toolkit-30.1.0_py39-macos.tar.xz that are the proper wheels username_4: We still haven't pinned ```py ; python --version Python 3.9.10 ; ./configure ... The conflict is caused by: scancode-toolkit 30.1.0 depends on bitarray<3.0.0 and >=0.8.1 The user requested (constraint) bitarray==2.1.0 ... ; . bin/activate (scancode-toolkit-30.1.0) ; pip install thirdparty/bitarray-2.1.0-cp39-cp39-macosx_10_14_x86_64.whl ERROR: bitarray-2.1.0-cp39-cp39-macosx_10_14_x86_64.whl is not a supported wheel on this platform. WARNING: You are using pip version 21.2.3; however, version 21.3.1 is available. You should consider upgrading via the '/Users/neo/Contrib/test-scancode/scancode-toolkit-30.1.0/bin/python -m pip install --upgrade pip' command. 
scancode-toolkit-30.1.0) ; pip install bitarray==2.1.0 # installed ``` Same iteration with`markupsafe==2.0.1`. Finally landing on `intbitset==2.4.1` where neither the given wheel in `thirdparty/` works nor the one from mypi. username_0: @username_4 Thank you ++ username_5: It seems you're trying to install `x86_64` wheels on a mac running `arm64`. @username_4, I guess that running `pip debug` will show `universal2` and `arm64` tags only for platform specific tags. Can you confirm ? If the wheels provided in the thirdparty folder are meant to be installed on any mac, they should be `universal2` wheels nowadays. If you want to achieve maximum compatibility (depending on user's pip version), they should even have multiple tags, probably, for `bitarray`, something like `macosx_10_14_x86_64.macosx_11_0_arm64.macosx_11_0_universal2`
dotnet/aspnetcore
623772270
Title: Asp-for and Date Inputs not showing the stored value by default Question: username_0: ### Describe the bug
When using an asp-for input, if displaying a default value (for example an edit form), by default unlike with strings or numbers, the date is not available and it's blank. To fix this you currently have to overwrite the `value=` attribute to inject the YYYY-MM-DD. Based on how other values are handled, this should be done implicitly. The bug is that instead of YYYY-MM-DD, you get a C# style DateTime string which isn't supported by the HTML5 standard. It needs to be converted before being passed to the user. I noticed that a DateTime input is not included in the MVC unit tests.
```
<input asp-for="Input.DateOfBirth" value="@Model.Input.DateOfBirth.Substring(6,4)[email protected](0,2)[email protected](3,2)" class="form-control" />
```
### To Reproduce
Create a form using asp-for tag helper that modifies a stored date object.
### Further technical details
```
dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 3.1.202
Commit: <PASSWORD>
Runtime Environment:
OS Name: Windows
OS Version: 10.0.18363
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.202\
Host (useful for support):
Version: 3.1.4
Commit: <PASSWORD>
.NET Core SDKs installed:
3.1.101 [C:\Program Files\dotnet\sdk]
3.1.200 [C:\Program Files\dotnet\sdk]
3.1.202 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download
```
Answers: username_0: Interesting, in that the tag helper should be formatting it based on the code at https://github.com/dotnet/aspnetcore/blob/90e89e970877a39cb048bb6f0e59551351f661c3/src/Mvc/Mvc.TagHelpers/src/InputTagHelper.cs#L468 and it's not. That block of code acknowledges the required RFC for filling the value attribute.
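For reference, the HTML5 date input only accepts the RFC 3339 `full-date` form (`yyyy-MM-dd`). A minimal sketch of the required conversion, shown in Python rather than C# for brevity:

```python
from datetime import date

# <input type="date"> requires the value attribute in "yyyy-MM-dd"
# form; a locale-formatted string such as "06/15/1990" is rejected
# by browsers and the field renders blank.
dob = date(1990, 6, 15)
formatted = dob.strftime("%Y-%m-%d")
print(formatted)  # → 1990-06-15
```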
AnySoftKeyboard/AnySoftKeyboard
475138907
Title: [Gesture typing] reset word mode on cursor move/positioning Question: username_0: ### Steps to reproduce 1. Gesture-type a word 2. Tap in the middle of the typed word to move and position the cursor 3. Perform anything, e.g. tap BS to erase a letter ### Actual behaviour See the word underlined. It brings a lot of weird effects, especially in interference with neighbour words Answers: username_1: @username_2 can you take a look at this? username_2: I'm testing on current master (commit cad0bdb35fe4eeea260bdff1f4a8cb67e3a05914). The only weird effect I see is that the inserted letter is pushed away: 1. The predicted word is committed. 2. A space is inserted. The cursor is put after that space. 3. The letter you tapped is inserted. 4. The cursor moves like normal, and ends up after that letter, ending with (for example) "testing a". It doesn't matter if I use the cursor keys or tap directly in the middle of the word. Of course it's still wrong behaviour, but it seems different from what @username_0 is describing. (That's to be expected, as this report is from months ago.) Could somebody verify from current master? username_0: I would be happy if anything was fixed. Then I'd test and report further problems if any remain 🤗 (As I already wrote, I'm still on Gboard because of the annoying bugs, that's why I'm annoying ))) username_1: I see the behavior @username_0 is describing on the latest _master_ 1. Swipe a word (say, "what") 1. Click the middle of the word to move the cursor to after the "a" 1. Type another "a". Expected to see "whaat", but I get "what a". @username_2 do you see a different behavior? @username_0 is that the issue you see? username_0: Exactly. My version is 1.10.606 from https://f-droid.org/app/com.username_1.android.anysoftkeyboard username_2: We're all seeing the same issue, then. I'll look into it. Status: Issue closed username_0: I'm testing it and like it very much! Thanks a lot!
stormpath/stormpath-sdk-node
218058663
Title: Setting status: 'ENABLED' doesn't seem to work Question: username_0: We have enabled Verification Email in the Stormpath web interface. But we have two registration paths: one where we want email verification and one where we don't. When using `application.createAccount` we are passing in the object:
```
{ username: 'some username', password: '<PASSWORD>', email: '<EMAIL>', givenName: 'some given name', surname: 'some surname', status: 'ENABLED' }
```
And yet the account status in Stormpath is "UNVERIFIED", so we can't automatically log the user in on registration.
home-assistant/core
680786862
Title: Thinkingcleaner - Doesn't work because of the cloud. Question: username_0: ## The problem Discovery of old Roombas (6XX) doesn't work because the cloud (https://thinkingsync.com/) reports errors. ## Environment - Home Assistant Core release with the issue: - Last working Home Assistant Core release (if known): - Operating environment (OS/Container/Supervised/Core): - Integration causing this issue: pythinkingcleaner - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml`
```yaml
- platform: thinkingcleaner
```
## Traceback/Error logs
```txt
File "/srv/homeassistant/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/srv/homeassistant/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/srv/homeassistant/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='thinkingsync.com', port=443): Max retries exceeded with url: /api/v1/discover/devices (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1056)')))
```
## Additional information I tried to modify the file /srv/homeassistant/lib/python3.7/site-packages/pythinkingcleaner/discovery.py as shown here: [https://github.com/home-assistant/core/issues/12822#issuecomment-377970379](https://github.com/home-assistant/core/issues/12822#issuecomment-377970379) but it doesn't work. My Roombas work perfectly, so it would be great if it were possible to disable the DISCOVERY URL and just enter the Roomba's IP. Thanks.
Answers: username_1: I disabled the HA integration, blocked internet access for the ThinkingCleaner faceplate, did a reset (>30 sec on the SPOT button) and reconfigured (by app). username_0: Please tell me if something works. I disabled HA but I gave up with the app. Please email ThinkingCleaner as well. Every time it happens I write to them, but I am afraid that the certificate of their site has, again, expired. username_1: Just a question: why is the cloud service needed? The API commands can be executed without it (LAN-only), but only unencrypted. Is that the problem for HA? username_0: I have no clue. Giving the IP in configuration.yaml means that LAN is supported. I think that the problem is the firmware: why lose the IP when the server is down? username_0: The cloud is up again. The SSL certificate is going to expire on the 13th of December, so cross your fingers.
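A small, hedged sketch of how one could check a certificate's `notAfter` timestamp from Python before relying on the cloud endpoint (the date below is just an example value, not the site's real certificate data):

```python
import ssl
import time

# Parse an OpenSSL-style "notAfter" timestamp (the format found in the
# dict returned by SSLSocket.getpeercert()) and compare it to the clock.
not_after = "Dec 13 12:00:00 2020 GMT"  # example value only
expires_at = ssl.cert_time_to_seconds(not_after)
print("expired" if expires_at < time.time() else "still valid")
```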
codelibs/fess
236128818
Title: "charset = unicode" page is not crawled correctly. Question: username_0: I am using FESS Ver 11.1.1. It seems that HTML pages encoded as "unicode" are not crawled correctly. Is there any solution? Problem: A "charset = unicode" page like the sample below is not indexed properly. The HTML file itself seems to be indexed by crawling, but it seems that the content cannot be decoded correctly. Recently, when creating a web page from Microsoft Office, it seems that such a "unicode" HTML page is created. I am in trouble because these pages cannot be searched. Just in case: if it is a "unicode" page, not only Microsoft's HTML pages but any page will not be searched. Thanks. Sample:
<html xmlns:v="urn:schemas-microsoft-com:vml"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:w="urn:schemas-microsoft-com:office:word"
xmlns:m="http://schemas.microsoft.com/office/2004/12/omml"
xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=Content-Type content="text/html; charset=unicode">
<meta name=ProgId content=Word.Document>
<meta name=Generator content="Microsoft Word 14">
<meta name=Originator content="Microsoft Word 14">
Answers: username_1: Could you please provide the sample file? username_0: Thank you for your quick response. I attached the 3 sample files: shift_jis, utf-8, and unicode. Only the "unicode" page is not searched. [sample.zip](https://github.com/codelibs/fess/files/1079293/sample.zip) username_1: Thank you for the info. Although I do not think "unicode" is valid for the meta charset, it will be supported as UTF-16LE in the next release. https://stackoverflow.com/questions/20529313/is-charset-unicode-utf-8-utf-16-or-something-else username_0: username_1 san, thank you for your info. I understood the situation. I am looking forward to the next release. Thanks. Status: Issue closed
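For illustration (in Python, not Fess's Java code): treating a `charset=unicode` page as UTF-16LE, as the fix describes, round-trips the text correctly:

```python
# Files that Word saves with charset=unicode contain UTF-16LE code
# units; decoding them with that codec recovers the markup.
text = '<meta http-equiv=Content-Type content="text/html; charset=unicode">'
raw = text.encode("utf-16-le")
assert raw[:2] == b"<\x00"  # little-endian: low byte of '<' comes first
print(raw.decode("utf-16-le") == text)  # → True
```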
CartoDB/observatory-extension
155815131
Title: Get geom by Segment ID Question: username_0: We are going to need functionality like this in a few places, but segmentation is the first one I've encountered it in. Here is the workflow, 1. i have a point dataset 2. i enrich with segmentation data from 55x 3. i filter my data to find a segment i'm particularly interested in 4. i then want the geometries where that segment exists elsewhere cc @username_1 @username_2 Answers: username_1: Totally. Will get to work on this. It should be pretty straight forward! username_2: I would generalize this to any categorical variable in the OBS, shouldn't make a difference in how it's written. username_1: Totally. That's the way I was planning on writing it username_2: 👍
rust-ndarray/ndarray
534371867
Title: Strange behavior when using ndarray in WebAssembly. Question: username_0: Currently I'm working on a project using ndarray in wasm. However I met a bug which is pretty weird. When using ndarray crate, code will produce incorrect result. After debugging, I shrink the reproduce code to a few lines. #### rust lib code: ```rust use std::mem; use std::slice; use ndarray::{Array, Array1}; #[no_mangle] fn alloc(size: usize) -> *mut u8 { let mut homework: Vec<u8> = Vec::with_capacity(size); let ptr: *mut u8 = homework.as_mut_ptr(); mem::forget(homework); ptr } #[no_mangle] fn free(ptr: *mut u8, size: usize) { unsafe { let _: Vec<u8> = Vec::from_raw_parts(ptr, 0, size); } } #[no_mangle] fn draw(ptr: *mut u8, size: usize) { let tmp: Array1<f32> = Array::zeros(20000); let canvas: &mut [u8] = unsafe{slice::from_raw_parts_mut(ptr, size)}; for i in 0..size { canvas[i] = 0x7F as u8; } } ``` #### html code: ```html <!DOCTYPE html> <html> <head> <title>wasm canvas serialize test</title> </head> <body> <canvas id ="main_canvas"> the main canvas</canvas> <script> const canvas_width = 256; const canvas_height = 256; async function init() { let canvas = document.getElementById("main_canvas"); let canvas_context = canvas.getContext("2d"); canvas.width = canvas_width; canvas.height = canvas_height; let wasm_module_stream = await fetch("./canvas.wasm"); let wasm_module = await WebAssembly.instantiateStreaming(wasm_module_stream); let {alloc, free, draw} = wasm_module.instance.exports; let {memory} = wasm_module.instance.exports; let size = canvas_width * canvas_height * 4; let canvas_buffer_ptr = alloc(size); let canvas_buffer_array = new Uint8ClampedArray(memory.buffer, canvas_buffer_ptr, size); [Truncated] canvas_context.putImageData(canvas_image_data, 0, 0); }); console.log("a_frame"); //}, 16); //free(canvas_buffer_ptr, size); } init(); </script> </body> </html> ``` These code allocate canvas buffer from wasm module, and let the module change canvas color, then render the 
canvas to the screen. When the following line is removed, the browser renders a grey canvas, which is the expected result. But when it's present, the canvas color won't change to grey.
```rust
let tmp: Array1<f32> = Array::zeros(20000);
```
Hope someone can help. Answers: username_0: The bug accidentally got fixed after I upgraded to Windows 10 1909 and Rust 1.39. Status: Issue closed
norman/babosa
54404270
Title: Should Japanese be supported? Question: username_0: on this file http://pastebin.com/CGTHiE58 to get this output http://pastebin.com/aknZyH0Q As you can see it isn't perfect, but for the most part correct and readable. So this raises the question, should Japanese be supported in the first place, seeing how hard and possibly open-ended it is...? Answers: username_1: I'm really not in favor of transliterating Japanese, at least not in this library. As you said, it's a large problem and one probably best left for a library dedicated to that problem exclusively. If you were to create such a library, however, I would be quite happy to make Babosa use it to provide support for Japanese. username_0: First version: https://github.com/username_0/hanami My first time actually bundling a ruby script as a gem, but I tried bundling the gem and installing it, so it should (?) work. See at the end of the readme file for a quick comparison with two other transliteration tools available on the net. Just keep in mind that Japanese transliteration isn't going to be perfect without human intervention... You are probably interested in Hanami#to_ascii, see readme and comments for more details. username_2: @username_0 Cool library :D Japanese transliteration is an open NLP problem. Trying to add it to a gem like this will only produce many bug reports down the road that the maintainers cannot possibly fix. Thus, I say 'no' to adding support for Japanese, at least until some really smart PhD student figures out the perfect transliteration algorithm that is bug-free :) username_3: Just a note. Even if it was possible and easy, I would not use transliterated Japanese in a slug. AFAIK, for Japanese people, Japanese text in roman characters does not make sense and it's really difficult to read. Status: Issue closed
plotly/plotly.js
366900329
Title: ScatterGl line2d error toggling trace mode to/from "lines+markers" Question: username_0: Testing with Plotly 1.41.3. I've got a ScatterGL plot that allows users to turn on and off "lines+markers" mode. This worked in the past, but now I'm getting the error "scene.line2d.destroy is not a function" when making this mode switch and updating the plot (I'm using `react()` to make the update). The error gets thrown from here: https://github.com/plotly/plotly.js/blob/master/src/traces/scattergl/index.js#L316. Using a breakpoint, I'm able to see that the problem is that the value of `scene.line2d` is set to `true` instead of being set to an object reference. So the call to `line2d.destroy()` is blowing up. I'll try to create a repro case, but I was hoping that someone on the Plotly team may just know how/where the `line2d` value could be getting set to `true` instead of set to an object reference. Or, maybe it would be prudent to modify that `destroy()` function to check that the values about to be invoked are actually functions, instead of the loose test for truthiness that is there now? Because obviously Plotly can get into a state where these are set to booleans rather than object references. Answers: username_1: Thanks for writing in. It's not obvious to me how that can happen. I haven't been able to reproduce this bug https://codepen.io/username_1/pen/ReRNzM username_0: OK, I'll see if I can create a repro case. Thanks. username_1: Maybe related to: https://github.com/plotly/plotly.js/issues/3004 username_0: Yes, that sounds exactly the same. Like the submitter for that issue, I am also having trouble reproducing it in a codepen. I have some charts where it seems to work fine and others where it throws the error, but there's not much difference between them other than the data values, so I'm somewhat baffled. I'll keep at it.
username_1: We released a fix for https://github.com/plotly/plotly.js/issues/3004 (which seems very much related to this ticket) in 1.42.0, so I'll close this issue. Status: Issue closed
Azure/azure-storage-azcopy
847164791
Title: NFR azcopy sync -delete-only Question: username_0: ### Which version of the AzCopy was used? 10.9.0 ### Which platform are you using? (ex: Windows, Mac, Linux) Linux ### What command did you run? azcopy sync ### What problem was encountered? azcopy sync should support option for delete-only to will allow async copy and delete jobs
screwdriver-cd/screwdriver
438993659
Title: Enable event cache for PRs Question: username_0: Previously, the event cache was disabled for PR builds because each PR event only includes 1 build. Since we started supporting `chainPR`, it makes sense to allow the event cache for PRs. We probably need an additional check to see if `chainPR` is on. If it's off, we shouldn't check the event cache, so that it doesn't waste time. Related: https://github.com/screwdriver-cd/screwdriver/issues/1257
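The proposed check could look something like this (a Python sketch of the logic only; the names are hypothetical, not Screwdriver's actual code):

```python
def should_check_event_cache(is_pr_build: bool, chain_pr_enabled: bool) -> bool:
    """Decide whether an event-cache lookup is worthwhile.

    Non-PR builds always benefit from the cache. PR events only
    contain a single build unless chainPR is enabled, so skip the
    lookup when chainPR is off to avoid wasting time.
    """
    if not is_pr_build:
        return True
    return chain_pr_enabled

print(should_check_event_cache(True, False))   # → False
print(should_check_event_cache(True, True))    # → True
```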
WheatonCS/Lexos
86385735
Title: Select2 table cache is not updated when values are changed. Question: username_0: Editing a label does not change its position in the sort order, and the new label cannot be searched. A [question was asked about this today](https://datatables.net/forums/discussion/28186/how-to-add-a-row-in-an-editable-table-and-keep-all-the-html-attributes) in the DataTables forum, so we might have some help soon. Answers: username_0: I have found a solution to this issue which seems to work for both sort and search. Finish the success part of the ajax function with this:
```javascript
tr = $target.parent();
table.row(tr).invalidate().draw();
```
The first line isn't really necessary but gets the table row into a readable variable. The second line invalidates the cached value for the row and then redraws the table. The explanation is [here](https://datatables.net/reference/api/row%28%29.invalidate%28%29). Note that this page hints that `row().invalidate()` might not be the best solution. We may be able to use `cell().data()`. It's worth taking a look before implementing and closing this issue. username_0: This problem was solved, but it looks like the issue was never closed. Status: Issue closed
aws-amplify/amplify-js
558334219
Title: Amplify OAuth documentation not updated with SignInWithApple Question: username_0: **Describe the bug** If you look at the Amplify authorization documentation and tutorials, there is no information about SignInWithApple even though the feature is implemented. **To Reproduce** 1. Go to https://aws-amplify.github.io/docs/js/authentication#oauth-and-federation-overview 2. See that there are no instructions about SignInWithApple 3. Go to https://aws-amplify.github.io/amplify-js/api/enums/cognitohosteduiidentityprovider.html 4. See that it does not show that one of the provider options is 'SignInWithApple', although this option is a possible parameter of the federatedSignIn method **Expected behavior** See instructions/information about the use of the SignInWithApple option in AWS Amplify. **Additional context** SignInWithApple is an important feature to document. Apple requires that apps offering social logins include a Sign In With Apple option in order to be approved. It is good that AWS Amplify already supports this feature, but I didn't see anywhere in the AWS Amplify documentation that talks about it.
jordwalke/esy-pesy-starter
385335560
Title: "esy-peasy" still present in some places Question: username_0: - The github project description says "Starter example using esy-peasy" - graphic at https://github.com/username_1/pesy/blob/master/README.md says "esy-peasy-starter" - readme at https://github.com/username_1/pesy/blob/master/README.md instructs to "git clone <EMAIL>:username_1/esy-peasy-starter.git" Answers: username_1: Fixed the project description.
void-linux/void-packages
676356613
Title: [Package Request] news_flash_gtk as FeedReader (predecessor) is deprecated Question: username_0: I tried to prepare a template for this but didn't figure out how to get it working... so I opened this issue instead of a proper PR. Package: https://gitlab.com/news-flash/news_flash_gtk My attempt at a template: ```sh # Template file for news-flash-gtk pkgname=news_flash_gtk wrksrc="${pkgname}-1.0-rc1" version=1.0+rc1 revision=1 build_style=meson short_desc="A modern gtk feed-reader written in rust" hostmakedepends="rust" makedepends="gtk+-devel webkit2gtk-devel libhandy-devel sqlite-devel" license="GPL-3.0-or-later" maintainer="*******************************" homepage="https://gitlab.com/news-flash/news_flash_gtk" distfiles="${homepage}/-/archive/1.0-rc1/${pkgname}-1.0-rc1.tar.gz" checksum="9c5d65699c9aba582e0c7b453d28e5bf430f449b4e7369958970947164f99ccb" ``` I hope this proves to be useful (even though it is not working). Maybe I can learn a thing or two once this is packaged :smile: Answers: username_1: It's probably best if you explain what went wrong in the template as well. username_0: ``` news_flash_gtk-1.0+rc1_1: do_configure: '${meson_cmd} --prefix=/usr --libdir=/usr/lib --libexecdir=/usr/libexec --bindir=/usr/bin --sbindir=/usr/bin --includedir=/usr/include --datadir=/usr/share --mandir=/usr/share/man --infodir=/usr/share/info --localedir=/usr/share/locale --sysconfdir=/etc --localstatedir=/var --sharedstatedir=/var/lib --buildtype=plain --auto-features=enabled --wrap-mode=nodownload -Db_lto=true -Db_ndebug=true -Db_staticpic=true ${configure_args} . ${meson_builddir}' exited with 1 => ERROR: in do_configure() at common/build-style/meson.sh:96 ``` This is the error output I got (I don't get it at all). But I still tried to define do_configure() so that similar to the build instructions on the GitLab page but got a similar error. 
Here are the build instructions:
```sh
meson --prefix=/usr build
ninja -C build
sudo ninja -C build install
```
Meanwhile I updated the template to be easier to edit (added a variable for substitution). See template ^ username_1: There must have been some output before that explaining the actual error. Can you show that? username_0: Hold up, I'm probably missing ninja ... right? username_1: No, build style meson implies ninja username_0:
```
Rust linker for the host machine: rustc ld.bfd 2.34
Host machine cpu family: x86_64
Host machine cpu: x86_64
Using 'PKG_CONFIG' from environment with value: 'pkg-config'
Did not find pkg-config by name 'pkg-config'
Found Pkg-config: NO
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency glib-2.0 found: NO
meson.build:11:0: ERROR: Pkg-config binary for machine MachineChoice.HOST not found. Giving up.
```
Ok, that makes things a lot clearer. username_1: Add `pkg-config` to `hostmakedepends` and you might need `glib-devel` in `makedepends`. username_0: already did it: Next issue:
```
+ cargo build --release --manifest-path=../Cargo.toml
Updating crates.io index
Updating git repository `http://gitlab.com/news-flash/news_flash_base.git`
error: failed to get `news-flash` as a dependency of package `news_flash_gtk v0.0.0 (/builddir/news_flash_gtk-1.0-rc1)`
Caused by: failed to load source for dependency `news-flash`
Caused by: Unable to update http://gitlab.com/news-flash/news_flash_base.git?branch=1-0-0-rc-1
Caused by: failed to find branch `1-0-0-rc-1`
Caused by: cannot locate local branch '1-0-0-rc-1'; class=Reference (4); code=NotFound (-3)
FAILED: src/com.gitlab.newsflash /bin/bash /builddir/news_flash_gtk-1.0-rc1/build-aux/cargo.sh .. src/com.gitlab.newsflash /builddir/news_flash_gtk-1.0-rc1/build ''
ninja: build stopped: subcommand failed.
```
it apparently also needed `cargo` and `gettext` username_1: Add `build_helper=rust` and remove `rust` and `cargo` from `hostmakedepends` username_0: I forgot to mention that it requires openssl. However, I built it before without xbps-src and it worked fine. I'm really sorry if that caused any inconvenience. Here are the actual required dependencies: devel of `gtk, webkit2gtk, libhandy, sqlite3, gettext and openssl` (replaced openssl-devel with libressl-devel) I can't believe I missed some of these while writing the template. Never hurts to look twice. I updated the template above so you can keep track of the changes I made. New error:
```
+ cargo build --release --manifest-path=../Cargo.toml
/builddir/news_flash_gtk-1.0-rc1/build-aux/cargo.sh: line 24: cargo: command not found
FAILED: src/com.gitlab.newsflash /bin/bash /builddir/news_flash_gtk-1.0-rc1/build-aux/cargo.sh .. src/com.gitlab.newsflash /builddir/news_flash_gtk-1.0-rc1/build ''
ninja: build stopped: subcommand failed.
=> ERROR: news_flash_gtk-1.0+rc1_1: do_build: '${make_cmd} -C ${meson_builddir} ${makejobs} ${make_build_args} ${make_build_target}' exited with 1
=> ERROR: in do_build() at common/build-style/meson.sh:122
```
username_1: Damn, I was very mistaken, sorry. Keep the build helper, but do like the `fractal` template does for `rust`, `cargo` and `rust-std`. username_1: Also, at this point, you should just open a PR. Make sure to follow the commit naming in `CONTRIBUTING.md`, and go ahead. username_0: I'll do that tomorrow. But thank you for your help! Hopefully I'll be able to do it by myself soon. Status: Issue closed username_2: newsflash is now packaged: 0045ad74e93b765a88a9496f795122c6a1ad8598
netlify/build
550513701
Title: [Tooling] Generating new build plugins Question: username_0: Placeholder issue for figuring out a nice user flow for generating new build plugins from a template or via an interactive CLI Answers: username_1: Started at https://github.com/netlify/build-plugin-template username_2: Thanks @username_1. Treating the starter as an initial resolution for this. Status: Issue closed username_1: The starter is only half-done. When users initialize the repository, it does not currently work. A few small things need to be fixed there. username_2: Related: https://github.com/netlify/build/issues/238 username_1: I gave it a quick update, it's now ready to be used :) Status: Issue closed
type-challenges/type-challenges
911189406
Title: 2 - Get Return Type Question: username_0:
```ts
type MyReturnType<T> = T extends (...args: any[]) => (infer R) ? R : never;
```
IBMStreams/streamsx.messaging
99001729
Title: KafkaConsumer gets an NPE processing received msgs that lack a key Question: username_0: If a received msg lacks a key, msg.key() returns null, e.g., a message created by bin/kafka-console-producer.sh ... --topic mytopic, the consumer gets an NPE when processing the received msg... trying to create a String from a null value. See the stacktrace from the kafka sample below. A fix would be to make AttributeHelper.setValue create an empty String for a null key. You can recreate the problem by starting the kafka sample and then generating additional msgs using the kafka tool above. 04 Aug 2015 11:14:23.105 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:com.ibm.streamsx.messaging.kafka.KafkaClient$KafkaConsumer.run:-1] - Topic[mytopic] Thread[0] Could not send message 04 Aug 2015 11:14:23.105 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - java.lang.NullPointerException 04 Aug 2015 11:14:23.105 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - java.lang.String.<init>(String.java:2178) 04 Aug 2015 11:14:23.106 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - com.ibm.streamsx.messaging.kafka.AttributeHelper.setValue(AttributeHelper.java:81) 04 Aug 2015 11:14:23.106 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - com.ibm.streamsx.messaging.kafka.KafkaClient.newMessage(KafkaClient.java:145) 04 Aug 2015 11:14:23.106 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - com.ibm.streamsx.messaging.kafka.KafkaClient.access$000(KafkaClient.java:30) 04 Aug 2015 11:14:23.106 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - com.ibm.streamsx.messaging.kafka.KafkaClient$KafkaConsumer.run(KafkaClient.java:176) 04 Aug 2015 11:14:23.107 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - java.lang.Thread.run(Thread.java:809) 04 Aug 2015 11:14:23.107 [25697] ERROR #splapptrc,J[52],P[107],KafkaStream2 M[?:?:0] - 
com.ibm.streams.operator.internal.runtime.OperatorThreadFactory$2.run(OperatorThreadFactory.java:137) Answers: username_1: This item has been fixed. Status: Issue closed
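The suggested fix (defaulting a missing key to an empty string instead of constructing a String from null) can be sketched as follows, in Python for brevity; the real change would live in `AttributeHelper.setValue`:

```python
def key_to_attribute(msg_key):
    """Return a safe string for a Kafka message key that may be absent.

    Messages produced by kafka-console-producer.sh without a key arrive
    with key() == None/null; map that to "" rather than passing it to
    the String constructor, which raises on null.
    """
    if msg_key is None:
        return ""
    return msg_key.decode("utf-8") if isinstance(msg_key, bytes) else str(msg_key)

print(repr(key_to_attribute(None)))   # → ''
print(key_to_attribute(b"order-42"))  # → order-42
```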
geocollections/geokirjandus
735524164
Title: Remember list/table selection Question: username_0: If a user selects and prefers the table view over the list view, this choice should be remembered in localStorage, so that when returning from a detail page to the query results, the previously used view style is active. Currently it switches back to the list view. The label "List" in Estonian should be changed to "Nimekiri". Status: Issue closed
Azure/api-management-developer-portal
879143401
Title: Risk of data loss: Page content is not completely backed up and not completely published anymore!!!!!! Question: username_0: ## Bug description Today, I published my portal contents to create review PDFs and realized that the contents of two pages did not appear in the published mode - even after clearing the cache!!! I was able to navigate to the pages and to see the template parts such as the header and the navigation tree on the left. But all page-specific contents were empty!!! The contents of these two corrupted pages are only visible in the editor/designer mode of my source portal environment where I do the technical writing. Thus, I did a backup of the portal contents and checked whether the content of the two affected pages was correctly backed up. BUT, the page content was not reflected in the backed-up data.json!!! I am afraid of data loss now!!! And I am currently not able to review or deploy the portal contents. And I don't know if I should continue updating the other pages as long as I do not know if this issue can be fixed. I did a backup of the portal nearly every day for such cases, but the last valid backup where the page content is correctly reflected was **on April 24th. On April 25th,** I did no backup. And as of April 26th, the page content and all my updates are not reflected anymore in my backup files!! At that time, I did not realize that there was already an issue in the published mode. So I wasn't aware and continued working on the corrupted and other pages. I have no clue what happened between April 24th and 25th that removed the page contents from the data.json. ## Reproduction steps 1. Publish the portal 2. Navigate to the affected pages. The page navigation (template) is visible, but the page contents are empty. 3. Do a backup of the portal contents 4. Search for the title of the affected page.
There is a page entry in the data.json, but the referenced document content is not part of the downloaded data.json. To fix the issue within the portal, I tried the following: 1) I created copies of the pages and checked how the copied pages behave after publishing and backup. The same issue occurred for the copied pages. Same behavior. 2) I tried to copy the sections from the corrupted pages and paste them on a new page. BUT the copied sections/blocks are not reflected in the "Saved" tab of the "Add section" library. And the moment I saved sections/blocks to the library, all my previously saved sections/blocks disappeared from the "Saved" tab of the "Add section" library!!! So I am also not able to continue using my building blocks for creating and updating other pages!! ## Expected behavior 1) The contents of the two affected pages appear not only in the editor/designer mode, but also in the published mode and in the backups. 2) If there is a problem with page contents in the backend of the portal, the portal in editor mode throws a warning message and explains what to do. That is, the portal backend validates that a document item exists for each page item; if not, it is able to correct the issue from the backend. 3) Saving blocks/sections to the library must always work in a reliable way - even in such cases, to be able to safeguard the content of corrupted pages. ## Is your portal managed or self-hosted? Managed / Self-hosted ## Additional context **@Alex/Mike:** I will send you the data.json with some details about the page IDs and the name of my source environments per email. Could you fix the issue by Monday? I urgently need to prepare a content review and I am not able to create scrollable screenshots or PDFs from pages displayed in editor/designer mode! Answers: username_1: Closing, since we're following up offline. Status: Issue closed username_0: The above-described issue happened again!!!!
Created a new page with contents, but the document content (contentTypes/document/) is not reflected in the backed-up data.json! I did not receive any error message during upload, and therefore did not recognize for days that the content is not backed up. Even when creating new pages on the affected instance, no document content (contentTypes/document/) is reflected in the backed-up data.json for the newly created test page. I uploaded the inconsistent data.json to another instance to test whether document content is reflected in the data.json for new pages created after the upload on the new instance. Yes, here, for the new pages, after downloading the data.json, the new page content is reflected in the data.json. Believe me that I carefully saved the contents in the GUI a hundred times (and tried to copy them) and backed up the data. BUT NOTHING helped. Even saving the sections/blocks is not possible anymore once this error occurs! The saved block entries are only reflected in the data.json and not in the GUI. But even in the data.json, there is no content available for the saved block entries! ---------- To continue my work on the affected environment, I performed the following steps to fix the issues: 1) Created a PDF as a backup for the texts of the page contents missing from the data.json 2) Executed a reset of the affected instance where the content of newly added pages is no longer reflected in the data.json. 3) Fixed the inconsistency in the data.json by adding dummy contents for the page entry in the data.json. 4) Uploaded the data.json with the dummy contents for the missing page contents. 5) Reworked the whole page contents according to the saved backup PDF, including styles and hyperlinks! 6) Continuing to author content and add new page contents from the GUI of the affected instance is possible again. Why do I need to do this kind of useless work, this kind of WASTE? Please fix the issue in your application as soon as possible.
As you haven't provided a fix for this issue, we have to create a script to validate the data.json after each backup to recognize the issue earlier. Again, a lot of time wasted on nothing but managing the "managed" (!) infrastructure. Sorry, but this is not how I expect a "managed" approach to work. I need to be able to focus on creating new contents, not to troubleshoot and work around issues every one or two weeks. This feels like a "green banana" (an unripe product) that puts the burden on the user, even though I know it is hard to find the root cause. It occurs suddenly, and it seems to occur when creating new pages and pasting saved blocks/sections.
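For what it's worth, such a validation script could be sketched roughly like this. The data.json structure is only assumed from the paths mentioned above (page entries that should reference a document under `contentTypes/document/`), so the key and field names here are hypothetical:

```javascript
// Hypothetical check: every page entry's content reference must resolve to a
// document entry in the same data.json; report pages whose content is missing.
function findPagesWithMissingContent(data) {
  const missing = [];
  for (const [key, item] of Object.entries(data)) {
    if (!key.startsWith("contentTypes/page/")) continue;
    // assumed field pointing at the page's contentTypes/document/... entry
    const contentKey = item.contentKey;
    if (!contentKey || !(contentKey in data)) missing.push(key);
  }
  return missing;
}

const sample = {
  "contentTypes/page/ok":     { contentKey: "contentTypes/document/ok" },
  "contentTypes/document/ok": { body: "..." },
  "contentTypes/page/broken": { contentKey: "contentTypes/document/broken" },
};
console.log(findPagesWithMissingContent(sample)); // → ["contentTypes/page/broken"]
```

Run against each downloaded backup, a non-empty result would flag the corruption immediately instead of days later.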
dhh1128/intent
61550964
Title: need a way to line wrap in the middle of a word, not on space Question: username_0: Suppose I am in the middle of a comment (or some other kind of statement), and I want to insert a hyperlink. The hyperlink may have no spaces in it, yet an IDE must wrap. The normal line wrapping mechanism (LF + <same indent> + "... ") is a proxy for a space. But in this case, we need a wrap that does *not* imply a space, so that unwrapping gives back what we started with. Perhaps LF + <same indent> + "...." (4 dots) instead of 3?
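A quick sketch of how unwrapping could treat the two markers (a hypothetical helper, purely to illustrate the proposal; the four-dot marker is the one suggested above, not anything intent currently implements):

```javascript
// Unwrap a line-wrapped statement. "... " (three dots + space) is a proxy for
// a space; "...." (four dots) joins with no space, so wrapping in the middle
// of a word (e.g. inside a long hyperlink) round-trips losslessly.
function unwrap(text) {
  return text
    .replace(/\n\s*\.\.\.\./g, "")   // four dots: join with no space
    .replace(/\n\s*\.\.\. /g, " "); // three dots + space: join with a space
}

const wrapped = "see https://example.com/very/long\n    ..../path and more\n    ... text";
console.log(unwrap(wrapped));
// → "see https://example.com/very/long/path and more text"
```

The four-dot replacement has to run first only as a matter of hygiene; since the three-dot pattern requires a trailing space, the two markers cannot collide.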
node-opcua/node-opcua
99229168
Title: server.engine.addFolder(); Question: username_0: Hi everyone, I'm trying to get the server tutorial (http://node-opcua.github.io/create_a_server.html) running but keep getting the following error: server.engine.addFolder("RootFolder",{ browseName: "MyDevice"}); TypeError: undefined is not a function Any ideas? Answers: username_0: Found the problem; it should be: server.engine.createFolder("RootFolder",{ browseName: "MyDevice"}); instead of: server.engine.addFolder("RootFolder",{ browseName: "MyDevice"}); Maybe this should be changed in the tutorials. username_1: In fact, the website has recently been updated to reflect recent changes in 0.48 that were not yet published by the time you found the problem. Version 0.48 has a few breaking changes compared to earlier versions. Please update your node-opcua package to the latest version or use the master version. Status: Issue closed
khyateh/lumstic-web
155866976
Title: Newly created survey behaves as a finalised survey Question: username_0: Steps: 1. Log into survey web ( https://vast-savannah-30841.herokuapp.com/surveys ) as <EMAIL> 2. Create a new survey, add Survey name and description, save survey 3. Add a question of any type Result: System displays a message box "Are you sure you want to add a question to a finalised survey" Clicking ok allows the question to be created. This message is displayed for all questions created Status: Issue closed Answers: username_1: Resolved
coblox/bobtimus
473134304
Title: Refreshing bitcoin UTXO sometimes times out and we receive an ESOCKETTIMEDOUT exception Question: username_0: We should investigate why updating the UTXO set times out randomly. `Failed to refresh UTXO: ESOCKETTIMEDOUT` related to these lines: https://github.com/coblox/bobtimus/blob/af6ff678f9555561e86a9540ae8bd4dcf9d067c1/src/index.ts#L23-L30 and https://github.com/coblox/bobtimus/blob/af6ff678f9555561e86a9540ae8bd4dcf9d067c1/src/wallets/bitcoin.ts#L150-L177 Answers: username_0: This was wrong; we can indeed change the timeout for the client. ## Proposed solution: Just update the timeout to a higher one; the default is 30 seconds. username_1: You found the mentioned option, it's the `status`. Status: Issue closed
chicken-sloths/bangazon-api-sprint1
307683987
Title: Change payment type data Question: username_0: ### Context Right now our Payment Types data includes things like payment, withdrawal, etc. We were thinking of creating our own json file (instead of using faker data) to say things like visa, mastercard, etc. ### Process 1. Delete the payment Types faker data and payment types json files 2. Create your own payment types json file 3. Maybe put it somewhere outside ``data/json``? So we can keep deleting that whole ``json`` folder if we want to re-build our data? Just a thought, we should talk about it.
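A minimal sketch of what such a payment types file could contain (the field name is hypothetical; shape it to match whatever the seeder expects, the point is simply concrete card brands instead of faker output):

```json
[
  { "name": "Visa" },
  { "name": "Mastercard" },
  { "name": "American Express" },
  { "name": "Discover" }
]
```

Keeping it outside `data/json` (as suggested in step 3) would let the generated folder stay safely deletable.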
yagi2/dotfiles
994345340
Title: .gitconfig no Question: username_0: `fr = "!$SHELL -c 'git diff --name-only | peco -prompt \"File>\" | xargs git checkout' __dummy__" # file reset` The peco option is missing one hyphen. Answers: username_1: https://github.com/username_1/dotfiles/commit/9ff3004e37c0d0a8dab79d51519071bdc805cd5f DONE Status: Issue closed
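The corrected alias (presumably what the linked commit changes: `--prompt` with two hyphens, which is peco's actual flag) would look like:

```
[alias]
	fr = "!$SHELL -c 'git diff --name-only | peco --prompt \"File>\" | xargs git checkout' __dummy__" # file reset
```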
JuliaLang/julia
263723616
Title: build failure on RaspberryPi Question: username_0: At 47fbcc9. My previous successful build was at d0430a4.

```
raspberrypi% make
CC src/llvm-ptls.o
/home/sachs/src/julia-master/src/llvm-ptls.cpp: In function ‘llvm::Instruction* {anonymous}::emit_ptls_tp(llvm::LLVMContext&, llvm::Value*, llvm::Type*, llvm::Instruction*)’:
/home/sachs/src/julia-master/src/llvm-ptls.cpp:145:65: error: ‘asm_str’ was not declared in this scope
 auto tp = InlineAsm::get(FunctionType::get(T_pint8, false), asm_str, "=r", false);
                                                             ^
Makefile:140: recipe for target 'llvm-ptls.o' failed
make[1]: *** [llvm-ptls.o] Error 1
Makefile:97: recipe for target 'julia-src-release' failed
make: *** [julia-src-release] Error 2
```

Answers: username_1: Are you on stretch? username_0: No, is Raspbian Stretch now required? username_1: Not really, but it is certainly better and the direction we are moving in. Even the compiler in stretch does not build LLVM correctly - but it does build. The issue is with stacktraces. username_2: That's totally unrelated..... Will fix later.... Status: Issue closed
jiaqi/jmxterm
295914627
Title: Does not build or run with java9 Question: username_0: Tried with Java 9.0.4. Build-time error:

```
[INFO] Running org.cyclopsgroup.jmxterm.pm.JConsoleClassLoaderFactoryTest
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.049 s <<< FAILURE! - in org.cyclopsgroup.jmxterm.pm.JConsoleClassLoaderFactoryTest
[ERROR] testLoad(org.cyclopsgroup.jmxterm.pm.JConsoleClassLoaderFactoryTest) Time elapsed: 0.012 s <<< ERROR!
java.lang.RuntimeException: Operation requires JDK instead of JRE
	at org.cyclopsgroup.jmxterm.pm.JConsoleClassLoaderFactoryTest.testLoad(JConsoleClassLoaderFactoryTest.java:22)
```

Runtime error:

```
:~/Downloads$ java -jar jmxterm-1.0.0-uber.jar
java.lang.NullPointerException
	at org.codehaus.classworlds.Launcher.getEnhancedMainMethod(Launcher.java:195)
	at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:294)
	at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
	at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
	at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at org.codehaus.classworlds.uberjar.boot.Bootstrapper.bootstrap(Bootstrapper.java:209)
	at org.codehaus.classworlds.uberjar.boot.Bootstrapper.main(Bootstrapper.java:116)
```

Answers: username_1: Same behavior on 10.0.1:

```
$ java -version
java version "10.0.1" 2018-04-17
Java(TM) SE Runtime Environment 18.3 (build 10.0.1+10)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10.0.1+10, mixed mode)
```

username_2: And in 12:

```
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test (default-test) on project jmxterm: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test failed.: NullPointerException -> [Help 1]
...
bash-4.2# java -version
openjdk version "12.0.2" 2019-07-16
OpenJDK Runtime Environment (build 12.0.2+10)
OpenJDK 64-Bit Server VM (build 12.0.2+10, mixed mode, sharing)
```

username_3: After making a local build with upgraded dependencies, setting "source" and "target" of maven-compiler-plugin in the POM to "13", building on AdoptOpenJDK 13 works fine, and running it under the same JDK 13 works and the console can be used. However, when trying to open a process I get the following error, which I cannot resolve:

```
Listing available beans...
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.cyclopsgroup.jmxterm.utils.WeakCastUtils$2 (file:/opt/livware/bigconv/scripts/jmxterm-1.0.1-uber.jar) to method sun.tools.jconsole.LocalVirtualMachine.getAllVirtualMachines()
WARNING: Please consider reporting this to the maintainers of org.cyclopsgroup.jmxterm.utils.WeakCastUtils$2
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
	at com.sun.proxy.$Proxy7.startManagementAgent(Unknown Source)
	at org.cyclopsgroup.jmxterm.jdk6.Jdk6JavaProcess.startManagementAgent(Jdk6JavaProcess.java:39)
	at org.cyclopsgroup.jmxterm.SyntaxUtils.getUrl(SyntaxUtils.java:51)
	at org.cyclopsgroup.jmxterm.boot.CliMain.execute(CliMain.java:127)
	at org.cyclopsgroup.jmxterm.boot.CliMain.main(CliMain.java:41)
Caused by: com.sun.tools.attach.AttachOperationFailedException: java.lang.InternalError: Unable to detect the run-time image
	at jdk.attach/sun.tools.attach.VirtualMachineImpl.execute(VirtualMachineImpl.java:224)
	at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.executeCommand(HotSpotVirtualMachine.java:309)
	at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.executeJCmd(HotSpotVirtualMachine.java:291)
	at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.startLocalManagementAgent(HotSpotVirtualMachine.java:250)
	at jdk.jconsole/sun.tools.jconsole.LocalVirtualMachine.loadManagementAgent(LocalVirtualMachine.java:237)
	at jdk.jconsole/sun.tools.jconsole.LocalVirtualMachine.startManagementAgent(LocalVirtualMachine.java:96)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.cyclopsgroup.jmxterm.utils.WeakCastUtils$1.invoke(WeakCastUtils.java:53)
	... 5 more
```
goharbor/harbor
642323062
Title: POST /api/ldap/ping failed with error: {"code":500,"message":"Internal Server Error"} Question: username_0: What can we help you? My Harbor setup: nginx:443 -> harbor1 host (192.168.200.4:80) / harbor2 host (192.168.210.13:80), with an external db, redis, and data storage (oss). I use a browser to access http://192.168.200.4/harbor/configs, set the LDAP Search Password under auth, save, and the LDAP server test is OK; /var/log/harbor/core.log shows no error. But when I use a browser to access http://192.168.210.13/harbor/configs and test the LDAP server under auth, it is not OK, and I find these errors on harbor2 (192.168.210.13):

tail -f /var/log/harbor/core.log
[ERROR] [/common/api/base.go:233]: LDAP connect fail, error: LDAP Result Code 49 "Invalid Credentials":
[ERROR] [/common/api/base.go:73]: POST /api/ldap/ping failed with error: {"code":500,"message":"Internal Server Error"}

So I use a browser to access http://192.168.210.13/harbor/configs, set the LDAP Search Password under auth, save, and the LDAP server test is OK; /var/log/harbor/core.log shows no error. But when I use a browser to access http://192.168.200.4/harbor/configs and test the LDAP server under auth, it is not OK, with the same error on harbor1 (192.168.200.4):

[ERROR] [/common/api/base.go:233]: LDAP connect fail, error: LDAP Result Code 49 "Invalid Credentials":
[ERROR] [/common/api/base.go:73]: POST /api/ldap/ping failed with error: {"code":500,"message":"Internal Server Error"}

How can I solve this? Status: Issue closed Answers: username_1: Hello, has your problem been solved yet?
fullcalendar/fullcalendar
677612554
Title: Customizing the current date Question: username_0: ![Annotation 2020-08-12 161730](https://user-images.githubusercontent.com/23297726/90009854-79952000-dcbc-11ea-8d8b-4d84c60cb15b.png) ### Reduced Test Case Not reproducible with a link. ### Bug Description I am looking to show the current date in the month view as in the picture above; it should be shown in all three views (month, week, and day). Could anyone suggest how we can do that? Answers: username_1: If this is a question, please refer to the [support page](https://fullcalendar.io/support/) and use [Stack Overflow](https://stackoverflow.com/questions/tagged/fullcalendar) for help. If this is a bug, please supply a [runnable, stripped-down demonstration](http://fullcalendar.io/wiki/Reduced-Test-Cases/).
xackery/peq-expansions
405479511
Title: Zone: legacylavastorm Question: username_0: The general idea is that the original version of lavastorm will be renamed legacylavastorm to give it a unique name distinct from the revamped version. All spell files, zone points, and location data will redirect to this legacy-prefixed version until the expansion in which it was revamped (Prophecies of Ro) is enabled. When this occurs, we need to include in Prophecies of Ro an SQL file that server owners can run to move players from the legacy version to their bind point. https://github.com/username_0/peq-expansions/blob/master/peq/zones/legacylavastorm.sql Answers: username_0: this idea has inherent issues, closing Status: Issue closed
OHIF/Viewers
959708269
Title: Viewers--ohif-viewer-3.11.11 load pdfjs has error Question: username_0: I downloaded the Viewers--ohif-viewer-3.11.11 version. After the command yarn install is executed successfully, an error is reported: ![TIM图片20210804083308](https://user-images.githubusercontent.com/56569509/128104000-eee3c251-dac5-42d9-a875-c4944da443f7.png)
wenzhixin/bootstrap-table
261569384
Title: How to exclude a column when exporting the table? Question: username_0: Hi! I would like to ask a question: ![k93fj9gsiv f m 12x](https://user-images.githubusercontent.com/32385451/31007552-80846010-a533-11e7-984d-d4653c3910bd.png) This column is like this: ![e4 tm6 n5 a5i55 i n e](https://user-images.githubusercontent.com/32385451/31007681-e7011400-a533-11e7-83da-4ffd2af8b957.png) What should I do to make this column appear in the HTML table, but not be exported to Excel? --Thank you for your answers, and I wish you a happy day! Status: Issue closed
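The issue was closed without an answer, but for reference: bootstrap-table's export extension delegates to tableExport.jquery.plugin, whose `ignoreColumn` option skips columns by index in the exported file only. A sketch, assuming the column to keep out of the export is the third one (index 2); option names should be checked against your bootstrap-table version:

```javascript
$('#table').bootstrapTable({
  showExport: true,        // enable the export toolbar button
  exportOptions: {
    // passed through to tableExport.jquery.plugin:
    // these column indexes stay visible in HTML but are skipped when exporting
    ignoreColumn: [2]
  }
});
```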
microsoft/BotFramework-Composer
859589157
Title: Cannot include dialog files in a package that contains code generated from a csproj Question: username_0: ## Describe the bug Currently the package manager looks for an 'exported' folder in the root of a NuGet package to find any dialog / declarative files that should be installed into the bot project. However, if a NuGet package is generated from a CSPROJ file, which they often are (including in the community repo), the 'exported' folder will not be at the root, it will be located at 'content/exported'. Also, without this change, it won't be easy to have a package that contains both dialog files and code without a bunch of additional work for the developer. ***Note: I realize that a content only NuGet package can, and arguably should, be created using a nuspec file (so that the unnecessary dlls are not included), but we need to address the above to allow for mixed packages. This will be a blocker for the Teams / Channel Handoff Package currently in development, which will likely contain mixed content.*** ## To Reproduce Try installing the Bot.Builder.Community.Components.Dialogs.GetWeather package from the Community repo. ## Expected behavior When installing Bot.Builder.Community.Components.Dialogs.GetWeather, we should look for the exported folder within the content folder of the package and not just at the root. Potentially related to #6994 Answers: username_1: Looks like I am facing the same issue in "Bot.Builder.Community.Dialogs.CustomAction", which contains C# + schema files. My project changed to BotComponent registration (the community repo's old concept) and I installed the package: no information in Bot Composer (the package is not installed), but Visual Studio's "Manage NuGet Packages" shows the package as installed. username_1: Raised one more bug in the components projects: https://github.com/microsoft/botframework-components/issues/913 (but not sure whether it is related to this)
DS4PS/cpp-529-fall-2020
740323775
Title: Count of Metropolitan Statistical Areas (MSAs) Question: username_0: Hello Dr. Lecy, I would like to ask if we are expected to choose 2 Metropolitan Statistical Areas (MSAs) as done in the lab 04 instructions, or do we just have to pick one? Thanks Answers: username_1: One metro only. The example in the notes demonstrates building the cartogram for one metro area, but it is a metro that spans two states. Since the Census API only allows you to get tract-level data from one state at a time, you need to make two calls to the API and then combine those separate datasets (and corresponding sf polygons) into a single metro area. It was selected as an example because several large metro areas in the US cross state lines and you might need to adopt this approach. Does that make sense?
docker/compose
179530271
Title: Error recreating project in 1.8.x Question: username_0: I'm seeing issues when when recreating a project in docker-compose 1.8.x. The first deploy is successful - network, volume, containers are created. If I run the same command again I would expect to see "up-to-date". Instead, I'm getting this: Recreating 6d014264613b_jenkinsdev_web_1 ERROR: for web open /var/lib/docker/containers/6d014264613bbc148f2a549f2629596df36a7154ce1c725492449feeb6681b12/.tmp-config.v2.json464797388: no such file or directory Encountered errors while bringing up the project. When I run docker-compose I'm passing "-p jenkinsdev", but for whatever the reason, it's prepending part of the container ID to the project name. I've reverted back to docker-compose 1.7.1 and it works as expected. What could be the issue in 1.8? Server Version: 1.12.1 Storage Driver: devicemapper Pool Name: docker-pool Pool Blocksize: 65.54 kB Base Device Size: 53.69 GB Backing Filesystem: xfs Data file: Metadata file: Data Space Used: 2.031 GB Data Space Total: 96.63 GB Data Space Available: 94.6 GB Metadata Space Used: 4.678 MB Metadata Space Total: 5.365 GB Metadata Space Available: 5.36 GB Thin Pool Minimum Free Space: 9.663 GB Udev Sync Supported: true Deferred Removal Enabled: false Deferred Deletion Enabled: false Deferred Deleted Device Count: 0 Library Version: 1.02.107-RHEL7 (2015-12-01) Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: null host bridge overlay Swarm: inactive Runtimes: runc Default Runtime: runc Security Options: seccomp Kernel Version: 4.1.12-37.5.1.el7uek.x86_64 Operating System: Oracle Linux Server 7.2 OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.795 GiB Name: dock01 ID: AFBR:JHEB:PRM7:K3JU:DJ4V:2SW5:3SAC:2ZLB:BZ3G:UNJQ:MRXZ:FQ5N Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ docker-compose.dev.yaml ``` version: '2' services: [Truncated] - "8080:8080" - 
"49187:49187" volumes: home: ``` common.yaml ``` version: '2' services: web: environment: JENKINS_SLAVE_AGENT_PORT: 49187 image: private-repo/jenkins-master:${CI_BUILD_REF_NAME} restart: always volumes: - home:/var/jenkins_home ``` Answers: username_1: Can you try `docker-compose up --verbose` and share the output (on 1.8.1)? The ID appending happens when the container gets into a bad state while it's being recreated, but it's hard to say why that happens without more info. username_0: Here's the setup: I started with a successful run of 1.7.1. I then ran docker-compose 1.8.1 with the same compose file. This seems to have a problem with devicemapper [run1_with_existing_containers.txt](https://github.com/docker/compose/files/498702/run1_with_existing_containers.txt) I then ran docker-compose 1.8.1 again with the same compose file, and I get the error in my original description [run2_with_existing_containers.txt](https://github.com/docker/compose/files/498701/run2_with_existing_containers.txt) username_0: Actually, I need to clarify: In my original problem I said there were no changes and I expected up-to-date, but I determined there were some small changes to the docker image between runs so there was a reason to re-create the project. username_1: The `failed to remove root filesystem` error leads me to think this is related to the following engine issue: https://github.com/docker/docker/issues/21704 username_2: I got this error when I accidentally put docker-compose process in background. Status: Issue closed
wolfenste/tictactoe
338131053
Title: Document the game first Question: username_0: Create a directory `docs/` in the project root and, inside it, a markdown document in which you describe how the game is played, winning conditions, everything. You can later use this document as a help file, but the most important usage for you is textual analysis: you will extract from it nouns and verbs which you'll use to model your actors, classes and operations. Status: Issue closed
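For instance (purely illustrative, not part of the assignment), nouns like *board*, *cell*, *mark* might become classes and fields, while verbs like *place a mark* and *decide the winner* become operations:

```javascript
// Illustrative skeleton derived from hypothetical nouns/verbs of a tic-tac-toe spec.
class Board {
  constructor() { this.cells = Array(9).fill(null); } // nouns: board, cell
  place(index, mark) { this.cells[index] = mark; }    // verb: place a mark
  winner() {                                          // verb: decide the winner
    const lines = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
    for (const [a, b, c] of lines) {
      if (this.cells[a] && this.cells[a] === this.cells[b] && this.cells[a] === this.cells[c])
        return this.cells[a];
    }
    return null; // no winning line yet
  }
}

const b = new Board();
b.place(0, "X"); b.place(1, "X"); b.place(2, "X");
console.log(b.winner()); // → "X"
```

The value of writing the document first is precisely that this mapping falls out of the text instead of being invented ad hoc.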
scriban/scriban
1114079325
Title: array.sort for multiple members and/or stable sort algorithm Question: username_0: First of all, this is really an **AWESOME** project! I have the problem that I have to sort an array by two members, first by `DateAdded` and then by `Name` (both properties of the Member class). Since we can only pass a single member to `array.sort`, my first idea was to use

```
sorted = members | array.sort "Name" | array.sort "DateAdded"
```

BUT this doesn't work, since the internally used `List<T>.Sort` does NOT use a stable sorting algorithm (see [docs](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.sort)). The solution would be to use a stable sorting algorithm like the one used by [`Enumerable.OrderBy`](https://docs.microsoft.com/en-us/dotnet/api/system.linq.enumerable.orderby). One workaround would be to override the builtin sort function with:

```csharp
private class CustomBuiltinFunctions : BuiltinFunctions
{
    public CustomBuiltinFunctions()
    {
        ArrayFunctions array = (ArrayFunctions)this["array"];
        array.Remove("sort");
        array.Import("sort", new Func<TemplateContext, SourceSpan, object, string, IEnumerable>(Sort));
    }

    private static IEnumerable Sort(TemplateContext context, SourceSpan span, object list, string? member = null)
    {
        if (list == null)
        {
            return new ScriptRange();
        }

        IEnumerable? enumerable = list as IEnumerable;
        if (enumerable == null)
            return new ScriptArray(1) { list };

        var items = enumerable.Cast<object>();
        if (!items.Any())
            return new ScriptArray();

        if (string.IsNullOrEmpty(member))
        {
            items = items.OrderBy(x => x);
        }
        else
        {
            items = items.OrderBy(target =>
            {
                IObjectAccessor accessor = context.GetMemberAccessor(target);

                if (!accessor.TryGetValue(context, span, target, member, out object value))
                {
                    context.TryGetMember?.Invoke(context, span, target, member, out value);
                }

                return value;
            });
        }

        return new ScriptArray(items);
    }
}
```

But in my opinion it would be of great value to use a stable sort algorithm by default.
Since this wouldn't be a breaking change, I would really welcome seeing this fixed in the next release. What do you think? Even better would be to support sorting by multiple members, which would be much more efficient and performant. Answers: username_1: Possible, more complicated to bring, up to you 😉
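For illustration, with a stable sort the chaining attempted above does work, because sorting by the secondary key first and then by the primary key preserves ties. A quick demonstration in JavaScript, whose `Array.prototype.sort` is guaranteed stable since ES2019 (the same property `Enumerable.OrderBy` documents, and exactly what `members | array.sort "Name" | array.sort "DateAdded"` relies on):

```javascript
const members = [
  { name: "Bob",   dateAdded: "2022-01-02" },
  { name: "Alice", dateAdded: "2022-01-01" },
  { name: "Carol", dateAdded: "2022-01-01" },
  { name: "Aaron", dateAdded: "2022-01-02" },
];

// Sort by the secondary key first...
members.sort((a, b) => a.name.localeCompare(b.name));
// ...then by the primary key; a stable sort keeps the Name order within equal dates.
members.sort((a, b) => a.dateAdded.localeCompare(b.dateAdded));

console.log(members.map(m => `${m.dateAdded} ${m.name}`));
// order: Alice (01-01), Carol (01-01), Aaron (01-02), Bob (01-02)
```

With an unstable sort, the second pass is free to scramble the Name order inside each `DateAdded` group, which is precisely the failure described above.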
spring-projects/spring-security
957794107
Title: Is there a way to check WebSession is expired for a currently logged-in session Question: username_0: I tried using a WebFilter and a GlobalFilter in SCG, but nowhere was I able to check the isExpired flag. I added a WebFilter ordered after Authorization (or first) in a WebFlux Spring Boot application and subscribed to the exchange.getSession() WebSession object, OR added a GlobalFilter ordered -1, using exchange.getSession() and map on the WebSession object to check the isExpired flag. Is there a different way to check the isExpired flag for a WebSession in a WebFlux Spring Boot application? Answers: username_1: Thanks for getting in touch, but it feels like this is a question that would be better suited to [Stack Overflow](https://stackoverflow.com/). We prefer to use GitHub issues only for bugs and enhancements. Feel free to update this issue with a link to the re-posted question (so that other people can find it) or add a [minimal sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces this issue if you feel this is a genuine bug. Status: Issue closed
Azure/azure-policy
834186175
Title: Configure Service Bus namespaces to use private DNS zones - does not assign identity to another subscription Question: username_0: #### Details of the scenario you tried and the problem that is occurring Service Bus policy "Configure Service Bus namespaces to use private DNS zones": it allows you to select the subscription and private DNS zone name, but it does not assign the managed identity to the private DNS zone (the subscription I selected is under a different management group, due to our hub-and-spoke setup). If you manually assign the managed identity to the DNS zone, it works. No problem is encountered if the private DNS zone is in the same management group. #### Verbose logs showing the problem #### Suggested solution to the issue #### If policy is Guest Configuration - details about target node Answers: username_1: Hi @username_0, users are now allowed to select user-assigned managed identities. This enables you to select a managed identity that is in the required zone. Thanks. Status: Issue closed
bioThai/Deduper-bioThai
1028428396
Title: Review Question: username_0: Looks good! I rewatched the lecture and your duplicate checks seem to add up. Specifically, ` - Create tuple containing read_umi, bit16_flag, chromosome_name, adjusted_starting_position, and assign it to "key" variable. - Note: If bitwise flag of one read and bitwise flag of another read have the same bit-16 flag states (either BOTH have bit16_flag == TRUE, or BOTH have bit16_flag == FALSE), then they both have the same strandedness and possibly could be duplicates.` I also like how you export the data into the output sam file once moving on to the next umi. Sorting the file (UMI, chromosome, read position) before hand was a good call so you could implement this.
TeaSpeak/TeaSpeak
326829923
Title: spam lock ban can be enforced by guests Question: username_0: On TeamSpeak, if guests have permissions to create channels and create them fast (about 1 per second is enough), they can cause others to get antiflood-locked (because the other clients have to subscribe to each channel manually) [it's an old "bug" that just hasn't been addressed by TeamSpeak "yet":tm:] https://www.youtube.com/watch?v=_hPCt-3auks On TeamSpeak it's annoying enough because everyone has to wait for their spam points to reach 0 after the malicious clients have been banned, but on TeaSpeak an "automatic" ban is created for all clients that are not protected by `b_client_ignore_antiflood`. ![grafik](https://user-images.githubusercontent.com/3318223/40588250-235c9b1a-61ca-11e8-9626-827ad12a0223.png) Possible solutions:
- Send all clients a `notifychannelsubscribed` after a channel was created so they don't need to subscribe to it themselves
- Don't raise antiflood points on `channelsubscribe` (still raise them on `channelsubscribeall`)

**P.S. Also, why is the same ban rule created 3 times?** ![grafik](https://user-images.githubusercontent.com/3318223/40588209-6329ab94-61c9-11e8-8ffb-19082282521a.png) Answers: username_1: Well basically the flood points for create a channel are less than creating a channel, but I think if you connect twice you could trigger that. I'll change something there username_1: Fixed within build 2 :) Status: Issue closed username_0: Awesome, which solution did you choose? username_1: Easy solution. I'm logging the channel create timestamp, and if the channel's age is under 5 seconds it's free to subscribe username_0: Can you check why the ban gets created 3 times tho?
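The fix described ("free to subscribe while the channel is younger than 5 seconds") could be sketched roughly like this (hypothetical names; TeaSpeak is not written in JavaScript, this only illustrates the rule):

```javascript
// Hypothetical sketch of the fix: subscribing to a freshly created channel
// costs no antiflood points, so clients are not flood-banned simply for
// keeping up with a channel-create spammer.
const FREE_SUBSCRIBE_WINDOW_MS = 5000;

function subscribeFloodPoints(channel, nowMs, normalCost) {
  // channel.createdAtMs is recorded when the channel is created
  return (nowMs - channel.createdAtMs) < FREE_SUBSCRIBE_WINDOW_MS ? 0 : normalCost;
}

console.log(subscribeFloodPoints({ createdAtMs: 1000 }, 3000, 25)); // → 0 (channel is 2s old)
console.log(subscribeFloodPoints({ createdAtMs: 1000 }, 9000, 25)); // → 25 (older than 5s)
```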
username_0: Also expired tempbans seem to not get deleted from the database
username_0: Ahh okay, it could make the database very large tho. Let's see after some uptime
username_2: I don't think size is a real problem here because of the small footprint of one ban-entry. Anyway. You could implement a config option for this or a command to remove old bans if its really a problem.
username_0: You have to remember that a normal ban consists of two entries
Status: Issue closed
username_1: ban option implemented
bearscho/bearscho.github.io
365233240
Title: Excel file upload
Question: username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
[영화리스트.xlsx](https://github.com/username_0/username_0.github.io/files/2431685/default.xlsx)
Status: Issue closed
AllTheMods/ATM3-Remix
536059110
Title: Unable to complete "a fiery projectile" from Thaumcraft Question: username_0:
## Expected Behavior
Get hit by either a Ghast fireball, a Blaze, or a fire charge, and unlock the research in the medium "projectile".
## Current Behavior
Can get hit without armor, charms, etc.; still no unlock in the Thaumonomicon.
## Possible Solution
Dunno, you are the expert. Maybe something wrong with the fireball? Maybe NetherEx, dunno.
## Steps to Reproduce
1. Get hit by a Blaze, Ghast, or fire charge
2. No unlock
Thaumonomicon:
https://gyazo.com/6a0bee7835ee476ba8992e5b6ec8e635
https://gyazo.com/c5334d282bb88b9958dda1d670925311
Answers: username_1: Change "replaceVanillaFireballs" to false in the ebwizardry config https://github.com/AllTheMods/ATM3-Remix/issues/310
username_2: Fixed in next version, expect a beta soon.
Status: Issue closed
drivendataorg/zamba
316267453
Title: re-train the first level models on the full dataset Question: username_0: In the original Cnnensemble implementation, an ensemble of L1 models trained on 4 folds was used. This provides very limited (if any) accuracy gain while being 4x slower; it's better to re-train the models on the whole dataset, including the test data.
Status: Issue closed
Answers: username_1: The current model will be replaced by the upcoming release.
aws-amplify/amplify-js
680332605
Title: DataStore SYNC query builder does not allows/include filter by owner Question: username_0: **Describe the bug** The DataStore is not including the `owner` as part of the sync query. Unless this is a limitation on how AppSync implements/expects sync, I think this is a BUG and causes scalability issues. **To Reproduce** 1. Install and init 2. Implement a simple schema with type having @model and @auth 3. Configure @auth to include owner 4. Configure DataStore **Expected behavior** The DataStore internally builds all queries and mutations. Since we have @auth with owner, it is expected that the `owner` field is populated on all queries, including sync queries. This is not the case. **Code Snippet** Here's where the SYNC query is built. The query does not allows for passing `filter`. https://github.com/aws-amplify/amplify-js/blob/main/packages/datastore/src/sync/utils.ts#L268-L278 ```graphql documentArgs = `($limit: Int, $nextToken: String, $lastSync: AWSTimestamp)`; ``` Answers: username_0: Let me expand on scalability. Since authorization filters are implemented on the Response Resolver, the result set to be filtered upon might or not include authorized items. Thus, the client can find itself getting 0 items after a sync event even though the actual query did hit 1k items, but the response resolver filtered them all out. By default the DataStore limits to 1k items for sync per page. For very small apps with few users, this can work as long as the items created by other users do not exceed 1k in between syncs. This won't obviously work on bigger payloads. username_0: The solution is simple, include a filter with the `owner`. username_1: hi @username_0 - thanks for reaching out. Would you be able to share the schema that you are using here? What is the default auth mode that you are using ? 
In the meantime, this workaround might be helpful -> https://github.com/aws-amplify/amplify-js/issues/5687#issuecomment-631185661
username_0: ```graphql
type Channel @model @auth(rules: [
  { allow: owner }
]) {
  id: ID!
  description: String
  owner: String
}
```
username_1: Thank you for providing this. To understand the full picture, it would be helpful to know the default auth mode you are using.
username_0: Oh, apologies, I did not understand the question. I had selected Cognito User Pool.
<img width="591" alt="Screen Shot 2020-08-20 at 3 23 18 PM" src="https://user-images.githubusercontent.com/1036734/90815692-15e0a780-e2f9-11ea-9c3b-26521a3efe0b.png">
username_0: Any news?
username_2: Hey @username_0, you will be able to pass a filter to the sync query once the feature in [this PR](https://github.com/aws-amplify/amplify-js/pull/7001) gets released.
Status: Issue closed
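For illustration, here is a sketch of what the quoted builder in `utils.ts` could look like with an optional filter argument. The function and query shape below are illustrative, not Amplify's actual internals; the `ModelChannelFilterInput` naming follows Amplify codegen conventions but is an assumption here.

```javascript
// Hypothetical sketch: a sync query builder that can optionally declare a
// $filter variable, so @auth owner rules can be pushed into the sync request.
function buildSyncQuery(modelName, { withFilter = false } = {}) {
  // Declare the extra variable and argument only when a filter is requested.
  const filterArg = withFilter ? `, $filter: Model${modelName}FilterInput` : '';
  const filterParam = withFilter ? ', filter: $filter' : '';
  return `
    query Sync${modelName}s($limit: Int, $nextToken: String, $lastSync: AWSTimestamp${filterArg}) {
      sync${modelName}s(limit: $limit, nextToken: $nextToken, lastSync: $lastSync${filterParam}) {
        items { id _version _deleted _lastChangedAt }
        nextToken
        startedAt
      }
    }`;
}
```

At call time, the client could then pass variables such as `{ limit: 1000, lastSync, filter: { owner: { eq: currentUsername } } }`, so the server filters before pagination instead of after.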
phoenixframework/phoenix_live_view
509560469
Title: dblclick binding Question: username_0: I needed a double-click binding on an element and ended up using a hook for it, but I was wondering what your view is on adding a `phx-double-click` binding. Happy to work on a PR if you think it might be a good addition! Answers: username_1: @username_0 Appreciate the input! Yes `hooks` are a good spot for them atm. As we are under active development and "how and why" we do things may change, I'll make sure to put this on the back burner. Just curious, what does your use case look like? I think this could be a great spot for the examples [repo](https://github.com/chrismccord/phoenix_live_view_example) in the meantime so others could take from your work. Status: Issue closed username_0: No real use case, I am writing a tutorial on how to recreate the TodoMVC with LiveView 😆 I'll try to submit an example next week!
unisonweb/unison
678092507
Title: Auto-format codebase Question: username_0: When Unison started none of the Haskell auto-formatters were that amazing. So I understand why it's hand-formatted. However, the latest generation of formatters are really quite good. I find [ormolu](https://hackage.haskell.org/package/ormolu) especially nice, though it doesn't really matter which one you pick. Sure, sometimes they make ugly decisions, but in return you get: **(a)** A faster coding experience (no hand-formatting imports!) and **(b)** consistency (whether you like the decisions or not, at least it's the same everywhere) Is this something the core Unison devs are interested in? PS: If you do go with ormolu you can still hand-format specific parts of code by wrapping it with `{- ORMOLU_DISABLE -}` and `{- ORMOLU_ENABLE -}`. This is occasionally helpful for things like enormous pattern matches where hand-formatting the columns can help readability. Answers: username_1: I'd be interested. How would we go about introducing such a thing, though? Do we... pause development and reformat the whole codebase all at once? (For PR diff readability purposes, it would be nice to have the big diffs behind us.) Or reformat one file at a time, slowly, for a while? One function at a time? Git pre-commit hooks? Re. `{- ORMOLU_DISABLE -}` do we need to audit each change for readability, then go back and say "no wait, this is a mess"? And how frequently would we need to do that? username_0: A HIGHLY interesting idea, though I haven't used them for this yet. That might be especially good though in such a big project with lots of contributors. username_2: I'd be very pleased if Unison was hit by ormolu. As a contributor, it is _very_ hard for me to edit the code in such a way that the git diffs are as clear as possible. This is because I often have to move code around while digging into a problem to better understand it, even if by the end of it I'm only making a small tweak. 
So, it'd be a big, big time saver and headache-reliever :) As far as how to go about it - I disagree that the best way is to hit the codebase all at once. I think that could result in some painful merge conflicts for contributors with large outstanding patches. My recommendation: format the codebase slowly over time so as to ease the merge conflict pain for those who have unmerged work. if you are editing a module, first hit it with ormolu. If there's a diff, put that in its own commit called "ormolu". Your patch may touch a few modules - each time any formatting occurs, just amend the one "ormolu" commit. By the end, you'll have one commit containing all formatting changes, and only formatting changes, and this commit can be skipped over in code review. At least for a first pass, I don't think there's a ton of value in staying on top of the formatting with hooks and/or CI. It seems simple enough to remind others to please format the code before merging; if a mistake happens, sometimes one will end up with unrelated formatting changes in one's patch, and this could just be called out as such. Anyway, guess I had a lot to say about this :) For context, we use ormolu at $WORK, and the feeling I have over there when I open a module is "_yessss_, I can tear this place up, then snap it all back into place", whereas when I'm working on Unison code I have a feeling of "I better be really careful here because I don't want to put together a confusing or misleading diff" username_0: Great point, I may have been wrong to suggest hitting everything at once. Since it's open source how could we be sure there are no big outstanding patches? username_1: * Ormolu (vs something else)? Ormolu sounds good to me. There are no settings; we'll just learn how to read it. If I understood @runarorama, @pchiusano, they also don't care about the choice of formatter, only that we don't have to think about formatting anymore. * Hit everything at once? I think we should hit everything at once. 
Formatting the codebase slowly will eventually reach the same state, with the same implications for folks with sufficiently outstanding PRs. And can't the large outstanding patch also be formatted with ormolu before merging, resulting in a small diff again? @username_2 * Formatting hooks / CI I think it's gotta be enforced, or it will just drift and cause diff problems for the next guy. Am I wrong? I don't think I can just eyeball some code in a PR and notice whether it would be changed or unchanged by ormolu. Side note, ormolu doesn't know about default extensions #1644, so it needs an ugly command-line to run. But maybe https://fgaz.me/posts/2019-11-14-cabal-multiple-libraries/ to enforce that all packages are using the same `default-extensions`, + a script to launch ormolu with the corresponding GHC options? username_0: Yeah, this is annoying for sure. Unfortunately passing `-o -XTypeApplications` is only option unless we remove `TypeApplications` from all the `default-extensions`, which would also be annoying. username_2: I think you would want to put `TypeApplications`, `PatternSynonyms`, and the other handful of syntax-stealing extensions at the top of every file, if the codebase is meant to be formatted with `ormolu`. This makes it so contributors (presumably) only need to configure a hotkey in their editor to run `ormolu` over the current buffer, without being project-aware (e.g. oh this is a unison repo - need to pass `-o TypeApplications`)
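If CI enforcement is the route taken, a minimal check step could look like the sketch below. This is an assumption-laden fragment: it presumes `ormolu` is available on the runner and that the `-o` extension flags are kept in sync with the project's `default-extensions`, per the discussion above.

```yaml
# Sketch of a GitHub Actions step; ormolu --mode check exits non-zero on any
# file whose formatting would change.
- name: Check Haskell formatting
  run: |
    ormolu --mode check -o -XTypeApplications $(git ls-files '*.hs')
```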
networkx/networkx
672900964
Title: edgelist in draw_network does not support arrays Question: username_0: Using numpy 1.18.2 and networkx 2.4, it is not possible to use a numpy array for the `edgelist` argument of [``draw_networkx``](https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html). It raises: ``` ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` I normally would not have opened an issue for that but I do not see the point of the current implementation, so in case it can be changed harmlessly to support numpy arrays, that might be a win-win. The issue comes from lines 575-579 on 2.4 ([lines 640-647](https://github.com/networkx/networkx/blob/master/networkx/drawing/nx_pylab.py#L640) on master) that read: ```python if edgelist is None: edgelist = list(G.edges()) if not edgelist or len(edgelist) == 0: # no edges! return None ``` I am not sure what the combination of ``not edgelist`` and ``len(edgelist) == 0`` is for, here, so I would propose to change it to ```python if edgelist is None: edgelist = list(G.edges()) elif len(edgelist) == 0: # no edges! return None ``` which should still do the job for all valid inputs I can think of and would also work with numpy arrays. Answers: username_0: OK, nice, I can make a PR if that helps. Iterators do not need to be supported, if I understand correctly, then, right? I can just use the ``elif len(edgelist) == 0:``? username_1: Yes, that is (I believe) the only change needed. It might be good to have a test to make sure that a numpy array works. username_0: PR is done and should be working (except for the CircleCI tests that fail because of an internal unrelated error, from what I can see) #4132 Status: Issue closed
smalltag/github-for-developers-7
122395836
Title: create a/b testing framework Question: username_0: *This text will be italic*
**This text will be bold**
**Everyone _must_ attend the meeting at 5 o'clock today.**
* Item
* Item
- Item
- Item
- Item
1. Item 1
   1. A
   2. B
   3. C
2. Item 2
3. Item 3
Here's an idea: why don't we take `SuperiorProject` and turn it into `**Reasonable**Pro`
[Visit GitHub!](https://www.github.com)
Answers: username_0: I :eyes: that :bug: and I :cold_sweat: :trophy:
swagger-api/swagger-ui
318621244
Title: Example(s) in Media object are not displayed. Question: username_0: | Q | A | ------------------------------- | ------- | Bug or feature request? | bug | Which Swagger/OpenAPI version? | 3.0.0 | Which Swagger-UI version? | 3.13.6 gcbff0251 | How did you install Swagger-UI? | npm run build | Which browser & version? | na | Which operating system? | na ### Demonstration API definition ```yaml info: title: Media Example Test version: v2 description: Media Example paths: /: get: operationId: main responses: '200': content: application/json: schema: type: integer example: 12 ``` ### Expected Behavior According to https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#mediaTypeObject `example` (or `examples`) should overwrite schema example(s). https://swagger.io/docs/specification/adding-examples/ agrees. ### Current Behavior The example is completely ignored and is not displayed. Answers: username_0: Ah, thanks. Will be watching #3437 closely. Status: Issue closed
DevotedMC/PrisonPearl
178155032
Title: Full spec for ExilePearl Question: username_0: ExilePearl "PrisonPearl for Pussies"

PrisonPearl was a plugin designed to let players control justice on a Minecraft server. They could control chaos and imprison griefers, or enslave entire nations. What was created was a tool left entirely open to abuse, especially by large coordinated groups. Instead of making actions have consequences, it made total abuse entirely possible. ExilePearl tries to limit the downsides of a tool that allows banishment, with the upside of controlling griefers and criminals.

---

Feature one: Exiling someone.
Same as with PrisonPearl, simply kill someone with an enderpearl in your hotbar and you will receive an ExilePearl. The person who was exiled is now randomspawned instead of sent to the end. Their name is added to a flatfile and they will face a loss of permissions. Not sure if you want to plug into PEX for this or handle it internally.

Feature two: Being exiled and loss of perms.
Loss of privileges (for Devoted) when exiled; each of these needs to be toggleable in the config:
- Cannot break reinforced blocks. (Citadel can stop damage, use that)
- Cannot break bastions by placing blocks. (Might need a Bastion change)
- Cannot throw enderpearls at all.
- Cannot enter a bastion field they are not on. Same teleport-back feature as WorldBorder.
- Cannot do damage to other players.
- Cannot light fires.
- Cannot light TNT.
- Cannot chat in local chat. Given a message suggesting chatting in a group chat in Citadel.
- Cannot use water or lava buckets.
- Cannot use any potions.
- Cannot set a bed.
- Cannot enter within 1k of their ExilePearl. Same teleport-back feature as WorldBorder.
- Can use a /suicide command after a 180-second timeout. (In case they get stuck in a reinforced box.)
- Cannot place noteblocks.

Exiled players can still play, mine, enchant, trade, grind, and explore.

Options we'll need in the config as well, but which we currently want turned off:
- Inability to mine. (Adventure mode)
- Inability to use potion stands.
- Inability to use enchanting tables.

Dependency with Towny: Disallow players from entering towns they are not in.
Dependency with Factions: Disallow players from entering enemy faction land.

Feature three: Finding a raider who dumbly exiled his friend for some reason.
When killed, an exiled prisoner will broadcast their pearl's location to the murderer. Same feature as in the end with PrisonPearl.

Feature four: PrisonPearl for now.
I'm interested in keeping PrisonPearl in for extreme cases. This will probably be the highest pearl cost of all servers. The way I'm thinking it'll work is using a vault factory on an exile pearl to convert it to a PrisonPearl. It'll cost 8 of each T3 material just to run the recipe. So we'll need a way to toggle ExilePearl or PrisonPearl on the pearl. I'm thinking a simple rename to PrisonPearl should be enough?

Answers: username_1: PrisonPearls have to be created within the plugin; just calling the pearl a PrisonPearl doesn't necessarily do anything :p But toggling should be do-able.
username_2: What about development? What did you decide? I see Gordon created his own branch in the ExilePearl project. I don't think it's reasonable for a few developers to start development at the same time and do the same work :)
username_0: Would you like to try and tackle the plugin changes outside of ExilePearl that will be needed?
username_2: What do you mean, help with other plugins? Sure, if help is needed, just tell me and I will try to help.
username_3: For the bastion requirement, I think Bastion will have to take ExilePearl as a dependency and do the check in the Bastion player-move event.
username_2: I think it's better to make ExilePearl depend on Bastion. Bastion should be independent from plugin add-ons. If needed, it's better to introduce new events and functions to Bastion that allow other plugins to interact with it.
username_3: Ok yeah, you're probably right. However, I don't think there's an API exposed to see whether someone is in a bastion field or not, so exposing a public API for that sort of thing would be something good to do. Unless it exists and I'm just not seeing it.
username_2: The Bastion plugin probably doesn't have the exact function you need, but I'm sure it provides helper methods and fields for this. You can implement your own method based on those (and probably add a new API function to Bastion). This is how I worked with Bastion in the CastleGates plugin: https://github.com/username_2/CastleGates/blob/master/src/main/java/com/aleksey/castlegates/bastion/BastionManager.java
wildfish/django-star-ratings
619554814
Title: Content Security Policy Errors Question: username_0: Using the django-csp package, I get the following errors in the browser console: Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self'. Either the 'unsafe-inline' keyword, a hash ('sha256-/hY9Yi5OpZMcXIYnTAVa79/aVAeq0Wn0Ahh5wBnXHd0='), or a nonce ('nonce-...') is required to enable inline execution. ``` <style> #dsr7828ac7bc96c4f578b1f749da1c31a73 .star-ratings-rating-full, #dsr7828ac7bc96c4f578b1f749da1c31a73 .star-ratings-rating-empty { width: 40px; height: 40px; background: url(/static/star-ratings/images/stars.png) no-repeat; background-size: 120px; } #dsr7828ac7bc96c4f578b1f749da1c31a73 .star-ratings-rating-empty { background-position: -40px 0; } </style> ``` `<ul class="star-ratings-rating-foreground" style="width: 60%">`
rnaseq/degraTrans
127499001
Title: Genome builds: filter. Question: username_0: With the latter genome builds, it is suggested to discard alt. contigs. Answers: username_1: To progress with the mapping, we need to upload links to the genome assemblies - we only need (hopefully) 4 links which are GRCh38 back through NCBI34 username_0: Here are the links to the genome assemblies for NCBI34, NCBI35, NCBI36, GRCh37 and GRCh38. Human chromosomes to be downloaded: 1-22, X,Y. NCBI34 (release22) ftp://ftp.ensembl.org/pub/release-22/human-22.34d/data/fasta/dna/Homo_sapiens.NCBI34.may.dna.chromosome.*.fa.gz NCBI35 (release 26 - release 37 incl.) ftp://ftp.ensembl.org/pub/release-37/homo_sapiens_37_35j/data/fasta/dna/Homo_sapiens.NCBI35.feb.dna.chromosome.*.fa.gz NCBI36 (release 38 - release 54 incl.) ftp://ftp.ensembl.org/pub/release-54/fasta/homo_sapiens/dna/Homo_sapiens.NCBI36.54.dna.chromosome.*.fa.gz GRCh37 (release 55 - release 75 incl.) ftp://ftp.ensembl.org/pub/release-75/fasta/homo_sapiens/dna/Homo_sapiens.GRCh37.75.dna.chromosome.*.fa.gz GRCh38 (release 76 - release 83 incl.) ftp://ftp.ensembl.org/pub/release-83/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.chromosome.*.fa.gz username_0: The relevant links to the genome assemblies for NCBI34/35/36, GRCh37/38 are available in a text file under datasets/dna. username_1: Suggest changes to the links as below. Also, we will need to filter the GTF files to only include genes/transcripts that are in our genome assembly files (particularly relevant for GRCh37 and 38) NCBI34 (release22 - release 25 incl.) ftp://ftp.ensembl.org/pub/release-25/human-25.34e/data/fasta/dna/Homo_sapiens.NCBI34.sep.dna.chromosome.*.fa.gz + ftp://ftp.ensembl.org/pub/release-25/human-25.34e/data/fasta/dna/Homo_sapiens.NCBI34.sep.dna.contig.fa.gz NCBI35 (release 26 - release 37 incl.) 
ftp://ftp.ensembl.org/pub/release-37/homo_sapiens_37_35j/data/fasta/dna/Homo_sapiens.NCBI35.feb.dna.chromosome.*.fa.gz + ftp://ftp.ensembl.org/pub/release-37/homo_sapiens_37_35j/data/fasta/dna/Homo_sapiens.0.NCBI35.feb.dna.contig.fa.gz NCBI36 (release 38 - release 54 incl.) ftp://ftp.ensembl.org/pub/release-54/fasta/homo_sapiens/dna/Homo_sapiens.NCBI36.54.dna.toplevel.fa.gz GRCh37 (release 55 - release 75 incl.) ftp://ftp.ensembl.org/pub/release-75/fasta/homo_sapiens/dna/Homo_sapiens.GRCh37.75.dna.primary_assembly.fa.gz GRCh38 (release 76 - release 83 incl.) ftp://ftp.ensembl.org/pub/release-83/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.primary_assembly.fa.gz
stulzq/kong-plugin-rate-limiting-ex
434047059
Title: Does not work for upgrade to kong-1.0.0 and later version Question: username_0: 'kong.tools.responses' has been removed from kong-1.0.0 or newer versions. This plugin program needs to change Steps To Reproduce issue: 1. Use kong-1.0.0 or newer versions 2. Add this plugin to /etc/kong/kong.conf 3. kong start --vv, and get below error nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/tools/utils.lua:584: .../share/lua/5.1/kong/plugins/rate-limiting-ex/handler.lua:5: module 'kong.tools.responses' not found:No LuaRocks module found for kong.tools.responses no field package.preload['kong.tools.responses'] no file './kong/tools/responses.lua' no file './kong/tools/responses/init.lua' no file '/usr/local/openresty/site/lualib/kong/tools/responses.ljbc' no file '/usr/local/openresty/site/lualib/kong/tools/responses/init.ljbc' no file '/usr/local/openresty/lualib/kong/tools/responses.ljbc' no file '/usr/local/openresty/lualib/kong/tools/responses/init.ljbc' no file '/usr/local/openresty/site/lualib/kong/tools/responses.lua' no file '/usr/local/openresty/site/lualib/kong/tools/responses/init.lua' no file '/usr/local/openresty/lualib/kong/tools/responses.lua' no file '/usr/local/openresty/lualib/kong/tools/responses/init.lua' no file '/usr/local/openresty/luajit/share/luajit-2.1.0-beta3/kong/tools/responses.lua' no file '/usr/local/share/lua/5.1/kong/tools/responses.lua' no file '/usr/local/share/lua/5.1/kong/tools/responses/init.lua' no file '/usr/local/openresty/luajit/share/lua/5.1/kong/tools/responses.lua' no file '/usr/local/openresty/luajit/share/lua/5.1/kong/tools/responses/init.lua' no file '/root/.luarocks/share/lua/5.1/kong/tools/responses.lua' no file '/root/.luarocks/share/lua/5.1/kong/tools/responses/init.lua' no file '/usr/local/openresty/site/lualib/kong/tools/responses.so' no file '/usr/local/openresty/lualib/kong/tools/responses.so' no file './kong/tools/responses.so' no file '/usr/local/lib/lua/5.1/kong/tools/responses.so' no file 
'/usr/local/openresty/luajit/lib/lua/5.1/kong/tools/responses.so' no file '/usr/local/lib/lua/5.1/loadall.so' no file '/root/.luarocks/lib/lua/5.1/kong/tools/responses.so' no file '/usr/local/openresty/site/lualib/kong.so' no file '/usr/local/openresty/lualib/kong.so' no file './kong.so' no file '/usr/local/lib/lua/5.1/kong.so' no file '/usr/local/openresty/luajit/lib/lua/5.1/kong.so' no file '/usr/local/lib/lua/5.1/loadall.so' no file '/root/.luarocks/lib/lua/5.1/kong.so' stack traceback: [C]: in function 'error' /usr/local/share/lua/5.1/kong/tools/utils.lua:584: in function 'load_module_if_exists' /usr/local/share/lua/5.1/kong/db/dao/plugins.lua:222: in function 'load_plugin_schemas' /usr/local/share/lua/5.1/kong/init.lua:343: in function 'init' init_by_lua:3: in main chunk Answers: username_1: Hello, have received your feedback, I will fix it。 username_0: Thanks heaps. Looking forward to it. username_2: @username_1 Do you have any updates on this?
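For readers hitting this: the error means the plugin's `handler.lua` still requires the `kong.tools.responses` module, which was removed in Kong 1.0 in favor of the Plugin Development Kit. A rough migration sketch follows; the handler class and `limit_exceeded` flag are illustrative placeholders, while `kong.response.exit` is the PDK call available in Kong >= 1.0.

```lua
-- Before (Kong < 1.0):
--   local responses = require "kong.tools.responses"
--   return responses.send(429, "API rate limit exceeded")

-- After (Kong >= 1.0), inside the handler's access phase, using the PDK:
function RateLimitingExHandler:access(conf)
  -- ... rate-limit bookkeeping (illustrative) ...
  if limit_exceeded then
    return kong.response.exit(429, { message = "API rate limit exceeded" })
  end
end
```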
ezefranca/hackathon-instagram-track
149296736
Title: Went to HackDFW last weekend. ayy #hackdfw #2016 #hack #hackathon #dallas #gilleysdallas #mlh by tricksterwiz Question: username_0: ![](http://ift.tt/23Ue5PE)
Went to HackDFW last weekend. ayy 😬😬😬 #hackdfw #2016 #hack #hackathon #dallas #gilleysdallas #mlh by tricksterwiz

April 18, 2016 at 07:33PM
via Instagram http://ift.tt/1VeMozu
nickschot/ember-mobile-menu
581231637
Title: Lazy render menus Question: username_0: Investigate: 1. Just in time rendering 2. Deferred initial rendering 3. ??? Answers: username_0: cc @nullvoxpopuli username_0: Somewhat relevant to this subject: with empress-blog-hummingbird-template I've noticed that the initial load of a route takes more time. It would be nice to have a feature which blanks the content until after render and then fades in
facebook/prophet
689993535
Title: How to use more features on fbprophet? Question: username_0: I want to train my model with more features along with the timestamp. Is there any way to include other features too? I searched a lot and failed to find out.
Answers: username_1: This is in the official documentation webpage [here.](https://facebook.github.io/prophet/docs/seasonality,_holiday_effects,_and_regressors.html#additional-regressors)
username_0: I thought it was only for adding holidays. So, can I just add and feed those columns? And is it forecasting all columns too?
username_1: No, the model is only built for predicting a single time series. You'd need future values for separate projections for each additional feature you include.
Status: Issue closed
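A short sketch of the additional-regressor workflow from the linked docs: extra features are just additional columns alongside `ds` and `y`, and their future values must be supplied at prediction time. The pure-Python helper below only shows the expected frame shape; the Prophet calls are commented out and assume the `fbprophet` package (named `prophet` in newer releases).

```python
def with_regressors(rows, names):
    """rows: iterable of (ds, y, feature_values) -> list of row dicts."""
    frame = []
    for ds, y, feats in rows:
        rec = {"ds": ds, "y": y}
        # Attach each named extra feature as its own column.
        rec.update(dict(zip(names, feats)))
        frame.append(rec)
    return frame


# With Prophet (not executed here):
# from fbprophet import Prophet   # newer releases: from prophet import Prophet
# import pandas as pd
# m = Prophet()
# m.add_regressor("temperature")
# m.fit(pd.DataFrame(with_regressors(history, ["temperature"])))
# future = m.make_future_dataframe(periods=30)
# future["temperature"] = ...     # future regressor values are required
# forecast = m.predict(future)
```

Note that Prophet still forecasts only `y`; the extra columns are inputs, not additional forecast targets.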
ActivisionGameScience/assertpy
318782782
Title: add dict include param Question: username_0: The `is_equal_to()` assertion supports an `ignore` param when verifying dicts, like this:
```py
assert_that({'a':1,'b':2}).is_equal_to({'a':1}, ignore='b')

# ignore multiple keys by passing a list of keys
assert_that({'a':1,'b':2,'c':3}).is_equal_to({'a':1}, ignore=['b','c'])

# ignore nested keys by passing a tuple
assert_that({'a':1,'b':{'c':2,'d':3}}).is_equal_to({'a':1,'b':{'c':2}}, ignore=('b','d'))
```
Need to add an `include` param that does the opposite, like this:
```py
assert_that({'a':1,'b':2}).is_equal_to({'a':1}, include='a')

# include multiple keys by passing a list of keys
assert_that({'a':1,'b':2,'c':3}).is_equal_to({'a':1,'b':2}, include=['a','b'])

# include a nested key by passing a tuple
assert_that({'a':1,'b':{'c':2,'d':3}}).is_equal_to({'b':{'d':3}}, include=('b','d'))
```
Status: Issue closed
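One possible shape for the `include` semantics is sketched below: filter both dicts down to the included keys before comparing. This is a hypothetical helper, not assertpy's actual implementation; a string names a top-level key, and a tuple names a nested path, mirroring the `ignore` convention above.

```python
def filter_include(d, include):
    """Keep only the given keys of d; str = top-level key, tuple = nested path."""
    keys = include if isinstance(include, list) else [include]
    out = {}
    for key in keys:
        path = key if isinstance(key, tuple) else (key,)
        src, dst = d, out
        for i, k in enumerate(path):
            if not isinstance(src, dict) or k not in src:
                break  # missing keys are simply skipped
            if i == len(path) - 1:
                dst[k] = src[k]
            else:
                dst = dst.setdefault(k, {})
                src = src[k]
    return out
```

An `is_equal_to(expected, include=...)` could then compare `filter_include(actual, include)` against `expected`.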
sitespeedio/sitespeed.io
189152738
Title: generated the following error in Browsertime BrowserError: unknown error: cannot create temp dir for user data dir Question: username_0: I'm trying to run sitespeed.io on AWS Linux behind a proxy, and it generated the following error in Browsertime: BrowserError: unknown error: cannot create temp dir for user data dir.
Answers: username_1: Hey @username_0, this looks like it comes directly from the browser. To be able to try to reproduce it I need to know what browser you are using and how you run it. And does it happen all the time, or just once, etc.? Best Peter
Status: Issue closed
username_1: Closing, let us know if it is still an issue.
participedia/usersnaps
907471169
Title: Usersnap Feedback - Spam - Please block user. I hid this entry. Question: username_0: **Sender**: <EMAIL>
**Comment**: Spam - Please block user. I hid this entry.
[Open #1076 in Usersnap Dashboard](https://usersnap.com/a/#/participedia-dev/p/participedia-net-9cb6d041/1076)
<a href='https://usersnappublic.s3.amazonaws.com/2021-05-31/15-26/eef15612-43ae-4891-91cf-8e1649807d23.png'>![Usersnap Feedback - Spam - Please block user. I hid this entry.](https://usersnappublic.s3.amazonaws.com/2021-05-31/15-26/96075dd1-ae43-45c3-bc59-db04867730ac.jpg)</a>
<a href='https://usersnappublic.s3.amazonaws.com/2021-05-31/15-26/eef15612-43ae-4891-91cf-8e1649807d23.png'>Download original image</a>
**Browser**: Firefox 88 (Windows 10)
**Referer**: [https://participedia.net/organization/7560](https://participedia.net/organization/7560)
**Screen size**: 1760 x 1100
**Browser size**: 1760 x 949
Powered by [usersnap.com](https://usersnap.com/?utm_source=github_entry&utm_medium=web&utm_campaign=product)
Answers: username_1: @username_0 this one too, thank you.
username_0: Test plans
**BLOCK CASE**
1. create a new account with an email like <EMAIL>
2. login to Auth0 to manually verify the user.
3. login to Participedia and create new entry.
4. ask me to block the account
5. try creating a new entry with this blocked account
**DELETE CASE**
1. create a new account with an email like <EMAIL>
2. login to Auth0 to manually verify the user.
3. login to Participedia and create new entry.
4. ask me to delete the account
5.
try creating a new entry with this deleted account
In both of these cases, the user should not be allowed to create any more entries after they're blocked or deleted.
username_0: After blocking the user, but before signing out, I was able to submit a case; but when attempting to submit a case after a subsequent login, I got logged out. @username_1 are you sure that blocking the user on Auth0 has always worked? We call Auth0 only when the user signs in. If they sign up and never log out (just close the window or leave it open), then even if they're blocked on Auth0 they're still logged in in Participedia's session. That's how they're still able to add new entries. This logic is not new. There are a few ways to fix this. I suggest we make a verification call to Auth0 when the user posts an entry, to check whether they're blocked/deleted on Auth0.
username_0: @username_1 how high priority is this? Sounds like we should get this done ASAP to get rid of this <EMAIL> user.
username_1: Yes, high pri. Thanks @username_0!
Status: Issue closed
abclinuxu/abclinuxu
880664242
Title: Access over https serves pages with elements loaded over http (Bugzilla Bug 1249)
Question: username_0: This issue was created automatically with bugzilla2github
# Bugzilla Bug 1249
Date: 2009-04-03 08:04:58 -0400
From: <NAME> <<EMAIL>>
To: <NAME> <<EMAIL>>
Last updated: 2010-09-25 20:09:13 -0400
## Comment 4048
Date: 2009-04-03 08:04:58 -0400
From: <NAME> <<EMAIL>>
When accessing username_0.cz over https, some part of the page (probably a link to pagead2.googlesyndication.com?) is transferred over http rather than https. The result is a warning from Firefox: "You have requested an encrypted page which contains some unencrypted information ...". Can anything be done about this? (other than disabling that warning)
## Comment 4081
Date: 2009-04-19 07:32:58 -0400
From: <NAME> <<EMAIL>>
No, it can't. We negotiated with the ad company (arbomedia) about it, but in vain.
cypress-io/cypress
558201427
Title: Searching for a solution to replace DOM elements
Question: username_0:
### Current behavior:
As we know, Cypress doesn't support browser tab changes. For a link we can change the target to `_self`, but my problem is with buttons, because a button has no target attribute.
### Desired behavior:
I am searching for a method to change a button into a link. I wrote the code below as a command, and when I use it, it says
```
Cannot read property 'replaceChild' of undefined
```
### Test code to reproduce
command.js
```
Cypress.Commands.add('changeButton', (element) => {
  cy.get(`.${element}`).then(($el) => {
    const aEl = document.getElementsByClassName('offer-tile__button offer-tile__button--success');
    const newEl = document.createElement('a');
    newEl.setAttribute('data-cy', 'offer-tile-to-booking');
    newEl.setAttribute('class', 'offer-tile__button--success');
    newEl.setAttribute('href', 'https://travel-test.lan/1522568');
    newEl.setAttribute('target', '_self');
    newEl.innerHTML = '<NAME>';
    aEl.parentNode.replaceChild(newEl, aEl);
  });
});
```
test.ts
```
Then('change the button to link', () => {
  cy.changeButton('offer-tile__button--success')
});
```
### Versions
"cypress": "3.8.1"
Answers: username_1: Issues in our GitHub repo are reserved for potential bugs or feature requests. This issue will be closed since it appears to be neither a bug nor a feature request. We recommend questions relating to how to use Cypress be asked in our [community chat](https://gitter.im/cypress-io/cypress). Also try searching our [existing GitHub issues](https://gitter.im/cypress-io/cypress/issues), reading through our [documentation](https://docs.cypress.io), or searching [Stack Overflow](https://stackoverflow.com/questions/tagged/cypress) for relevant answers.
Status: Issue closed
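For readers landing here from search: the error in the snippet above comes from `document.getElementsByClassName` returning an HTMLCollection rather than a single element, so `aEl.parentNode` is `undefined`. A sketch of a corrected helper (the class names and href are the asker's; the function shape is ours):

```javascript
// Sketch: take the first matching element out of the collection before
// touching parentNode. Passing a `doc` parameter keeps the helper testable.
function buttonToLink(doc, className, href) {
  const matches = doc.getElementsByClassName(className); // HTMLCollection
  const btn = matches[0];               // an element, not the collection
  if (!btn) return null;
  const link = doc.createElement('a');
  link.setAttribute('class', className);
  link.setAttribute('href', href);
  link.setAttribute('target', '_self');
  link.textContent = btn.textContent;
  btn.parentNode.replaceChild(link, btn);
  return link;
}
```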
tjoskar/ng-lazyload-image
246493386
Title: scrollObservable question
Question: username_0: Thank you for such a great component. I've been able to utilize it in so many different scenarios, and it works great. However, I've run into a case where I need to load a new photo on the page without re-initializing the page. All the data bindings for the model on the page are using the async pipe, so when the data changes everything changes as expected... except the updated image, because it's not loading. I've been reading your source code, and I'm trying to understand how to pass my own scrollObservable.... My first attempt is something like this:
```
@ViewChild('detailContainer', {read: ElementRef}) scrollContainer:ElementRef; // ref to old scrollTarget

lazyLoadImage$ : Subject<ElementRef> = new Subject();

ngOnInit() {
  this.route.data.shareReplay().subscribe(({facility}) => {
    this.lazyLoadImage$.next(this.scrollContainer); // Every time a new facility is loaded, emit a new value to the scrollObservable
  });
}
```
I feel like I'm going about this completely wrong. Could you give me an example of what to pass to scrollObservable. Thanks!
Answers: username_1: This is unfortunately not supported. If I understand you correctly, you have a couple of URLs to images that you are loading with the async pipe and you want to lazyload the images when the URL is ready or even replace the image if the URL changes. Let's say we have the following template:
```html
<div [lazyLoad]="'https://someimages.com/'+(facility | async)'" [scrollObservable]="scrollObservable"></div>
```
When the template is loaded, this directive (ng-lazyload-image) will kick in and check if the image (div) is in the viewport; if it is, the directive will try to lazy load the image, which is at this point 'https://someimages.com/', so we will fail and unsubscribe the (scroll) observable and will not try to load the image again, even though the URL changes (facility emits).
This should, however, be quite easy to implement, see my comment here https://github.com/username_1/ng-lazyload-image/issues/140#issuecomment-290899163
I may be able to implement it next week but I will gladly accept pull requests :)
Let me know if I completely misunderstood you ;)
username_0: Hi @username_1,
Thanks for your reply. Ok, I will try to give it a shot next week if I have time. I need this functionality for a project, so it's a good opportunity to contribute... but I might need some guidance.
Status: Issue closed
username_1: Hi @username_0,
Can you update to version `3.3.1` to see if it solves your issue? You should not need to include any observable, just:
```html
<div [lazyLoad]="'https://facilityimageapi.imgix.net/'+(facility | async)?.thumbnail+'?auto=format&w=600&h=auto&q=60'"></div>
```
username_0: Hi @username_1,
Wow! You're a man of your word! Seriously, I didn't expect you to actually fix this, but this is a pleasant surprise. It did fix my problem... 90%. It's a workable solution, so thank you! If there is one improvement I might suggest, hear me out.
Here is exactly how I'm using it:
```
<div class="image-container">
  <img class="pixel-image" [src]="'https://imageapi.net/'+(facility | async)?.thumbnail+'?auto=format&w=800&h=auto&q=1&blur=50'" alt="">
  <div class="full-image" [lazyLoad]="'https://imageapi.net/'+(facility | async)?.thumbnail+'?auto=format&w=600&h=auto&q=50'" [scrollTarget]="detailContainer"></div>
</div>
```
So, when the page initially loads, everything works as expected. If the user is on a slow connection, they will first see the `<img>`, which is a very, very small, blurred image that will load instantly, and then the full image is lazy-loaded in. This behavior worked before your update. With your update, when the `facility` model changes dynamically on the page, the image now switches to the current model. So that's a huge improvement.
However, ideally I want the user to see the updated blurred image first while the new `facility` image is loaded in (especially important for mobile users). I ran some tests, and I think I pinpointed the problem to the `.ng-lazyloaded` class that is applied to the lazy-loaded div once the image is fully loaded. If you removed the `ng-lazyloaded` class when you detect a new image is being loaded, and then re-applied it once the image has loaded, this should give the expected behavior I was looking for.
Thanks again for your work, this is a great library.
username_1: Nice that you got it to work! But I think you are right, we should reset the class `ng-lazyloaded` when updating the image.
username_1: Thank you for such a great component. I've been able to utilize it in so many different scenarios, and it works great. However, I've run into a case where I need to load a new photo on the page without re-initializing the page. All the data bindings for the model on the page are using the async pipe, so when the data changes everything changes as expected... except the updated image, because it's not loading. I've been reading your source code, and I'm trying to understand how to pass my own scrollObservable.... My first attempt is something like this:
```js
@ViewChild('detailContainer', {read: ElementRef}) scrollContainer:ElementRef; // ref to old scrollTarget

lazyLoadImage$ : Subject<ElementRef> = new Subject();

ngOnInit() {
  this.route.data.shareReplay().subscribe(({facility}) => {
    this.lazyLoadImage$.next(this.scrollContainer); // Every time a new facility is loaded, emit a new value to the scrollObservable
  });
}
```
Then in the template:
```html
<div class="full-image" [lazyLoad]="'https://facilityimageapi.imgix.net/'+(facility | async)?.thumbnail+'?auto=format&w=600&h=auto&q=60'" [scrollObservable]="lazyLoadImage$"></div>
```
I feel like I'm going about this completely wrong. Could you give me an example of what to pass to scrollObservable. Thanks!
UPDATE: I think this is closer, but the lazy-loaded images (except the first one) aren't being loaded. Here's my code:
```js
@ViewChild('detailContainer', {read: ElementRef}) scrollContainer:ElementRef; // ref to old scrollTarget

lazyLoadImage$ : Subject = new Subject();

ngOnInit() {
  this.route.data.shareReplay().subscribe(({facility}) => {
    this.lazyLoadImage$.next(this.scrollContainer); // Every time a new facility is loaded, emit a new value to the scrollObservable
  });
}

ngAfterViewInit() {
  this.scrollObservable = Observable.merge(Observable.fromEvent(this.scrollContainer.nativeElement, 'scroll'), this.lazyLoadImage$);
}
```
Then in my template:
```html
<div class="full-image" [lazyLoad]="'https://facilityimageapi.imgix.net/'+(facility | async)?.thumbnail+'?auto=format&w=600&h=auto&q=60'" [scrollObservable]="scrollObservable"></div>
```
Any ideas why this isn't working... even when I try to force a load with `next` on my subject?
Status: Issue closed
username_1: This was fixed in `3.3.3`.
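The fix the thread converged on (reset `ng-lazyloaded` when the bound URL changes) can be sketched framework-agnostically. This is an outline of the behaviour, not the library's actual source; `loadImage` is an injected loader assumed to return a promise.

```javascript
// Sketch: when a new URL arrives, drop the loaded marker so placeholder
// styling (e.g. the blurred preview) shows again, then restore it once
// the new image has finished loading.
function reloadLazyImage(el, newSrc, loadImage) {
  el.classList.remove('ng-lazyloaded');   // back to the "loading" state
  return loadImage(newSrc).then(() => {
    el.classList.add('ng-lazyloaded');    // reveal the freshly loaded image
    return newSrc;
  });
}
```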
naser44/1
161087903
Title: "Ramadan, a Flame of Victory" contest to help Syrian families fleeing the hell of war
Question: username_0: <a href="http://ift.tt/1WWXZDW">The &ldquo;Ramadan, a Flame of Victory&rdquo; contest to help Syrian families fleeing the hell of war</a>
abacritt/angularx-social-login
358098615
Title: PWA integration
Question: username_0: Hi,
Sorry, I am using the issue tracker for a question. But I am wondering whether you have tested the functionality with progressive web apps, and if not, do you think it can be integrated with a PWA?
Thank you,
Alex
Answers: username_1: Duplicate of #6.
Status: Issue closed
ray-project/ray
700629527
Title: [rllib] Unregistered state-preprocessor variables
Question: username_0:
### What is the problem?
It is not clear to me where the variables of a state-preprocessor, i.e. the core "forward" layers, are registered. Assume I use SAC (or DDPG) in discrete mode with a VisionNet as a state-preprocessor. Those variables are not registered as action_model variables, nor as Q variables. Is it possible that they are not being trained?

*Ray version and other system information (Python version, TensorFlow version, OS):* python3.6, ray 0.8, ubuntu 18
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [ ] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
Answers: username_1: Shouldn't they be registered by default by being members of nn.Module? You can probably check the .variables() list to see if they are indeed registered.
username_0: Forgive the misleading title. They are registered but do not belong to any gradient. The SAC implementation defines its gradients over a selected list of variables. For example, the actor gradient is defined explicitly over the actor variables (see `gradients_fn` in `sac_tf_policy.py`). Those are the weights that are returned by `policy_variables` in `sac_tf_model.py`. However, this contains just the weights of the output heads and not the weights of the state-preprocessor. While there are 24 registered variables, most of them belong to the CNN; the gradient of the policy is defined over 4 weights only.
username_1: Good point. That does look like a problem, @username_2 can you have a closer look?
username_2: It makes sense that the preprocessor variables aren't part of the model, as intended in the [data pipeline](https://docs.ray.io/en/latest/rllib-models.html). Is it possible you can move this preprocessor to the model (i.e. define a custom model)?
username_0: The current data pipeline supports a shared body configuration because the state-preprocessor is assumed to be shared between the actor and the critic. I think this should be left as a design choice.
username_1: @username_2 , that pipeline is referring to the preprocessor outside the model. @username_0 is talking about the state_preprocessor network in SAC which is differentiable through the loss.
username_3: Yes, this is a bug. The "preprocessor" here is not an RLlib Preprocessor object, but e.g. a simple Conv2D stack (chosen automatically if possible by RLlib) pre-attached to the Q- and policy models. Will fix this. ...
username_3: Sorry, can you be more specific about the version of ray you are using, as well as some information on your config, tf/pytorch, etc..?
username_3: @username_0
username_3: I do see all variables registered correctly for tf as well as all parameters included in the nn.Module for pytorch: `rllib\agents\sac\tests\test_sac.py::TestSAC::test_sac_compilation` framework=torch|tf env=MsPacmanNoFrameskip-v4
username_1: @username_3 see the comment https://github.com/ray-project/ray/issues/10765#issuecomment-696537955, it seems the vars are registered but not included in the grad calculation?
username_0: ray 0.8, TensorFlow 1. I think that this is also the case with ddpg.
username_3: Ah, yes, sorry, checking the custom gradient functions ...
username_3: Yeah, we don't calculate or apply gradients for these, true. Ok, I'll make the `self._actor_optimizer` take care of training the preprocessor vars. Any objections?
username_3: https://github.com/ray-project/ray/pull/10959
username_3: DDPG still missing. Will PR (see above) tomorrow.
username_0: Makes sense.
But the main issue IMO is to allow a separate body configuration. Another thing: you might want to take the minimum of the Qs in the discrete actor loss function.
Nir
Status: Issue closed
username_0:
### What is the problem?
It is not clear to me if the variables of a state-preprocessor, i.e. the core "forward" layers, are assigned to a gradient. Assume I use SAC (or DDPG) with a VisionNet as a state-preprocessor. Those variables are not registered as actor variables, nor as Q variables. However, the actor (critic) gradient is selectively defined over actor (critic) variables only. Is it possible that the variables of the state-preprocessor are not being trained?

*Ray version and other system information (Python version, TensorFlow version, OS):* python3.6, ray 0.8, ubuntu 18
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [ ] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
username_3: So true. We are planning on making Model-architecture configurations much simpler and more flexible in the future (~Q4, Q1 2021). What you should do if you want to separate into 2 Conv2D stacks is to just write a custom Model class that does this, then wrap the SACModel with that custom class.
username_3: Great catch on the min Q for discrete actions! I'll fix that.
username_4: I also had problems with this and assigned the variables to the critic loss, not the actor loss. My motivation for this was based on empirical evidence from the [deepmind control suite paper](https://arxiv.org/pdf/1801.00690.pdf).
Figures 7 and 8 show that the network barely learns anything when the CNN variables are optimized as part of the actor loss.
username_0: That's interesting. They say that separate bodies performed the worst, which is what I use now :( In my opinion, though, such conclusions should be treated with caution. I couldn't find Figure 8. I assume you referred to Fig. 7? Any objections to closing the issue?
username_4: @username_0 : No objection to closing - thanks for your comment! Yes, sorry, I meant Figures 6 and 7. Did you have any success with training a pixel-based policy other than on Atari? So far, it's eluding me and I would be grateful for any hints on which environments and algorithms work!
username_0: It would be foolish of me to pretend to know the answer to this... But here's my two cents: pixel-based policies usually go together with discrete actions. In this case, PPO is your go-to algorithm. As a trust-region algorithm, it will provide you with "safe" optimization. However, as an on-policy algorithm, you might be facing exploration challenges that are better solved by tweaking the environment, if possible.
Status: Issue closed
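The bookkeeping discussed above can be written out explicitly (the symbols are ours, not RLlib's): with policy-head parameters θπ, Q-head parameters θQ, and shared preprocessor parameters φ, the bug was that neither update covered φ. The linked PR attaches φ to the actor optimizer, while username_4's variant attaches it to the critic loss instead:

```latex
% Broken: \phi enters both losses but appears in neither update
\theta_\pi \leftarrow \theta_\pi - \alpha\,\nabla_{\theta_\pi} L_\pi(\theta_\pi,\phi),
\qquad
\theta_Q \leftarrow \theta_Q - \alpha\,\nabla_{\theta_Q} L_Q(\theta_Q,\phi)

% Fix in the PR: train \phi through the actor loss
(\theta_\pi,\phi) \leftarrow (\theta_\pi,\phi) - \alpha\,\nabla_{(\theta_\pi,\phi)} L_\pi(\theta_\pi,\phi)

% username_4's variant: train \phi through the critic loss instead
(\theta_Q,\phi) \leftarrow (\theta_Q,\phi) - \alpha\,\nabla_{(\theta_Q,\phi)} L_Q(\theta_Q,\phi)
```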
project-sunbird/project-sunbird.github.io
279725856
Title: Difference when providing hyperlink reference
Question: username_0: When providing a hyperlink reference for an image, the path takes the folder name and image name. However, when providing a hyperlink for pages, the link starts with the pages folder, then the folder name, then the file name. Please rectify. For consistency, the path should take the same form in both cases: either the paths for both images and pages take only the folder name and file name, or both links start with the pages folder and file name. In either case, it should be the same for both.
Answers: username_1: This is fixed and will be updated soon with Multi-version Documentation.
Status: Issue closed
rails/rails
544467188
Title: Error installing Rails 6 with MYSQL Question: username_0: Postgres installs ok but not with mysql. Ruby: 2.7.0 Rails 6.0.2.1 OS: Ubuntu 18 Command: `rails new doo --api -d mysql` Results: ``` To see why this extension failed to compile, please check the mkmf.log which can be found here: /home/deploy/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/extensions/x86_64-linux/2.7.0/mysql2-0.5.3/mkmf.log extconf failed, exit code 1 Gem files will remain installed in /home/deploy/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/mysql2-0.5.3 for inspection. Results logged to /home/deploy/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/extensions/x86_64-linux/2.7.0/mysql2-0.5.3/gem_make.out An error occurred while installing mysql2 (0.5.3), and Bundler cannot continue. Make sure that `gem install mysql2 -v '0.5.3' --source 'https://rubygems.org/'` succeeds before bundling. In Gemfile: mysql2 run bundle binstubs bundler Could not find gem 'mysql2 (>= 0.4.4)' in any of the gem sources listed in your Gemfile. run bundle exec spring binstub --all Could not find gem 'mysql2 (>= 0.4.4)' in any of the gem sources listed in your Gemfile. Run `bundle install` to install missing gems. ``` Answers: username_1: I think your error is in install of gem mysql2 because I followed your command to install with ruby 2.6.5 and 2.7.0 and I have success in both. Command: `bundle exec rails new ~/Projects/test2-7 --api -d mysql` Ruby: 2.6.5 and 2.7.0 Rails 6.0.2.1 OS: macOS Catalina 10.15.2 Output result in 2.7.0: https://pastebin.com/Crw77iFG username_2: This is missing some of the information so I'm not sure what library mysql is choking on but this isn't a Rails bug. I had a whole bunch of issues with libxml2 and openssl on Catalina. This post may help you https://www.rakeroutes.com/bundle-install-for-rails-6/ Status: Issue closed username_3: I tried using ubuntu:18.04 in Dockerfile. 
```console
$ cat Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y \
  ruby \
  ruby-dev \
  build-essential \
  libmysqlclient-dev \
  libssl-dev \
  git
RUN gem i rails
RUN rails new doo --api -d mysql
$ docker build -t rails_38138 .
Successfully tagged rails_38138:latest
```
libmysqlclient-dev and libssl-dev are required to build the mysql2 gem.
username_0: Oh, this works on Macs. I installed this on an Ubuntu server. Running the `new` command and then `bundle install`, I get the error. If it's a mysql gem issue, then there is an issue with the install process using the `-d mysql` command.
username_4: Seems like a problem installing the mysql2 gem; feel free to open an issue at https://github.com/brianmario/mysql2. Anyway, this looks not related to Rails itself.
backdrop/backdrop-issues
27180415
Title: SA-CORE-2013-001 - Multiple vulnerabilities
Question: username_0: ## Security fixes in [Drupal 7.19](https://drupal.org/drupal-7.19-release-notes)
### Cross-site scripting
A reflected cross-site scripting vulnerability (XSS) was identified in certain Drupal JavaScript functions that pass unexpected user input into jQuery, causing it to insert HTML into the page when the intended behavior is to select DOM elements. Multiple core and contributed modules are affected by this issue. jQuery versions 1.6.3 and higher provide protection against common forms of this problem; thus, the vulnerability is mitigated if your site has upgraded to a recent version of jQuery. However, the versions of jQuery that are shipped with Drupal 6 and Drupal 7 core do not contain this protection. Although the fix added to Drupal as part of this security release prevents the most common forms of this issue in the same way as newer versions of jQuery do, developers should be aware that passing untrusted user input directly to jQuery functions such as jQuery() and $() is unsafe and should be avoided.
### Access bypass
A vulnerability was identified that exposes the title or, in some cases, the content of nodes that the user should not have access to. This vulnerability is mitigated by the fact that the bypass is only accessible to users who already have the 'access printer-friendly version' permission (which is not granted to Anonymous or Authenticated users by default) and it only affects nodes that are part of a book outline.
### Access bypass
Drupal core provides the ability to have private files, including images. A vulnerability was identified in which derivative images (which Drupal automatically creates from these images based on "image styles" and which may differ, for example, in size or saturation) did not always receive the same protection. Under some circumstances, this would allow users to access image derivatives for images they should not be able to view.
This vulnerability is mitigated by the fact that it only affects sites which use the Image module and which store images in a private file system.
For reference, here is the [notice on drupal.org](http://drupal.org/SA-CORE-2013-001)
Answers: username_1: - [x] Cross-site scripting (double-checked)
- [x] Access bypass 1 (double-checked) We actually later removed print functionality, though this fix was there too. :)
- [x] Access bypass 2 (double-checked)
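The jQuery behaviour the advisory describes can be illustrated with a small guard. This is a sketch of the general idea behind the jQuery 1.6.3+ hardening, not Drupal's actual patch: input containing markup must never reach the HTML-creating path of `$()`.

```javascript
// Sketch: classify untrusted input before handing it to a selector engine.
// Newer jQuery only treats strings that look like markup as HTML; anything
// a user could inject "<img onerror=...>" through must be refused here.
function safeSelectorOrNull(input) {
  if (/</.test(input)) {
    return null;          // contains markup: refuse to use as a selector
  }
  return input;           // plausible CSS selector, safe to pass along
}
```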
jdelles/library
978553002
Title: Deleting last entry issue
Question: username_0: Hello! I was checking your project and it looks neat! However, I noticed that, when there is only 1 book left in the library, it is not removed from localStorage when the removeBookFromLibrary function is invoked. Would you like me to open a PR?
Answers: username_1: Sure!
Status: Issue closed
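A plausible shape of the fix, as a sketch only (`myLibrary`, the storage key, and the function signature are assumptions; the project's real names may differ): a common cause of this bug is guarding the write with a length check, which skips persisting the now-empty array.

```javascript
// Sketch: always write the array back after a removal. Without a length
// guard, deleting the last book overwrites the stored copy with "[]",
// so the entry disappears from localStorage as well.
function removeBookFromLibrary(myLibrary, index, storage) {
  myLibrary.splice(index, 1);
  storage.setItem('library', JSON.stringify(myLibrary)); // even when empty
  return myLibrary;
}
```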
KillerCodeMonkey/ngx-quill
529274041
Title: Refresh custom toolbar
Question: username_0: Hi Bengt,
When a custom toolbar is created, `select` elements are replaced by custom elements `span.ql-picker` and some `ql-picker-options`. I would like to know how I can trigger an update of this content when the selects are built with `*ngFor` values that could change.
I have 2 examples to illustrate this implementation:
- https://stackblitz.com/edit/ng-quill-editor-2 In that case, you have one custom configuration, the font size and line height lists don't change, and everything works fine.
- https://stackblitz.com/edit/ng-quill-editor In that case, I tried to bind a different config for font sizes and line heights for each zone. It can't work because the toolbar DOM is already initialized and the `option *ngFor="..."` is replaced by the custom select.
I noticed that in order to work, I need the toolbar component to be in the DOM when my editor component is declared (because it uses `modules = { toolbar: '#domElem', ...`), so I can't just use a `*ngIf` on the toolbar or something like this to rebuild the toolbar automatically.
Note, while writing this, I had the idea to wrap only the content of the toolbar in an *ngIf: https://stackblitz.com/edit/ng-quill-editor-3 But still, it looks like I need to trigger the transformation from `select` to `ql-picker-options` :)
It's not a bug but more a question about the lifecycle of the component :)
Thanks for your awesome help (and I didn't have the time to add examples to the ngx-quill-examples repo yet, but that's still on my mind :))
Cheers
Answers: username_1: to be honest, i never tried such things, :)
But i think you can just try to create your toolbar programmatically: https://github.com/quilljs/quill/blob/develop/modules/toolbar.js#L162 After onEditorCreated use the quill instance passed as parameter and get the toolbar module and add your controls :) username_0: Thanks for your reply. The main issue for me is that the DOM of my toolbar need to change as the select options won't be the same, not only buttons. I tried to recreate the toolbar that way https://stackblitz.com/edit/ng-quill-editor-4?file=src/app/editor/text-layer-editor.service.ts but doesn't seems to work as I use Parchment attributor styles it doesn't seems to declare them as Format. (Of course the doc of parchment is not very well documented :/) I tried to use the methods your pointed out too but I might not have a good approach. If you got any other leads I'm all ears :) Thanks ! username_1: Sorry but i think you tried everything i know :). Maybe ask this question directly in the quill repo? username_1: @username_0 so can this issue here be closed? Status: Issue closed username_0: Well, I think a found a lead, I'm gonna share this if it could help someone later : https://stackblitz.com/edit/ng-quill-editor-5 The idea was to wrap the toolbar component and the editor component in an `ngIf` to make sure it will re-render every time and you can pass new config options every time on toolbar creation. It's still a not 100% working on the exemple but I think I know how to fix that in my more complex project. :) Thanks for your time !
OpenPHDGuiding/phd2
90842814
Title: Request ability to zoom image Question: username_0: ``` requested for drift alignment where polar align circle is very small as alignment converges. could also be useful in other circumstances. ``` Original issue reported on code.google.com by `<EMAIL>` on 13 Apr 2015 at 6:26 Answers: username_1: related to #269
Thibok/Bilemo
438675680
Title: [Frontend] Authentication
Question: username_0: **Authentication**
-- Create route (/) [GET]
-- Create SecurityController (Controller)
-- Add loginAction (SecurityController method)
-- Create login.html.twig (Security Twig view)
-- Create User (Entity)
-- Create FacebookClient (Client)
-- Create FacebookRequester (Requester)
-- Create UserManager (Manager)
-- Create FacebookUserProvider (Provider)
-- Create FacebookAuthenticator (Authenticator)
-- Make tests
**Estimation**
-- 8h
Status: Issue closed
Answers: username_0: Feature finished; #10 tests complete. 6h
CrunchyData/crunchy-containers
265360968
Title: implement pgadmin4-v2 on pg10
Question: username_0: I found various issues attempting to get this version to run... I first had to add these lines to pgAdmin.py at the top to get it to run: `reload(sys)` and `sys.setdefaultencoding("utf-8")`. I then had to hack the preconfigured pgadmin.db to set the password.... so, bottom line, there needs to be some investigation here to get this to work properly...
Answers: username_0: as part of this exercise, we would also want to make the containers run as USER 26 on both Kube and OCP... today this is out of sync due to the way pgadmin4 works and the security settings within OCP.
username_1: @username_0 Ready for closing?
username_0: merged
Status: Issue closed
go-rillas/gor
295381487
Title: Only remove shebang if there is a shebang in the file Question: username_0: Create a test around these lines: https://github.com/go-rillas/gor/blob/9c6ef1a8c0c78bacb89998b1090050d89f3e6c4e/gor.go#L75-L76 Answers: username_0: added in https://github.com/go-rillas/gor/commit/f16d7578f40f2bfdc93ce989e45f9d7ab5ba195a Status: Issue closed
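The guard the title asks for is small; here is a sketch of the idea in JavaScript (the project itself is Go, so this is purely illustrative):

```javascript
// Sketch: strip the first line only when it actually is a shebang, so
// source files without one pass through untouched.
function stripShebang(src) {
  if (!src.startsWith('#!')) {
    return src;                        // no shebang: leave the file alone
  }
  const nl = src.indexOf('\n');
  return nl === -1 ? '' : src.slice(nl + 1);
}
```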
trimou/trimou
102977581
Title: NullPointer is thrown when providing an index out of bounds for JsonArray
Question: username_0: There should be a more meaningful exception, which should give information about the context and the current size of the array.
How to reproduce: {{My.json.array.1}} when the array has only one element.
Answers: username_1: @username_0 I wonder if it wouldn't be better to implement the same logic as `CombinedIndexResolver` does, i.e. don't render anything... maybe log a warning. However, an NPE is definitely useless. Thanks for reporting the issue!
username_0: Maybe it should be configurable? If it renders nothing, then in some cases it may fail silently. Let's imagine a situation where I want to render all or nothing, and I want to see that the JSON I provided does not satisfy all template placeholders. Then an exception would be good. Logging a warning could be not enough for robust processing. So a simple switch, "fail on unresolved placeholder", may be beneficial. Just my opinion.
username_1: We already have `MissingValueHandler` for null values (see also http://trimou.org/doc/latest.html#missingvaluehandler). But this is not specific to a particular resolver - it's invoked after the resolver chain returns null (i.e. no resolver was successful).
username_0: Oh, this MissingValueHandler is a great idea. So returning null from the resolver, with logging, is the best way to go.
username_1: Fixed.
Status: Issue closed
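The behaviour the thread settles on (resolve an out-of-bounds index to "no value", log a warning, and let `MissingValueHandler` decide what to render) can be sketched language-agnostically; the snippet below is illustrative JavaScript, not Trimou's Java:

```javascript
// Sketch: an index resolver that never throws on a bad index. Returning
// null signals "unresolved" so the engine's missing-value handling (and
// optionally a "fail on unresolved placeholder" switch) takes over.
function resolveArrayIndex(array, key) {
  const idx = Number(key);
  if (!Number.isInteger(idx) || idx < 0 || idx >= array.length) {
    console.warn(`index ${key} out of bounds (array size ${array.length})`);
    return null;   // handled downstream instead of a NullPointerException
  }
  return array[idx];
}
```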
nilmtk/nilmtk
51846934
Title: WikiEnergy index is not monotonic
Question: username_0: This is likely to cause problems in a lot of downstream functions.
``` python
In [16]: from nilmtk import DataSet

In [17]: ds = DataSet("/home/nipun/Desktop/wikienergy.h5")

In [18]: store = ds.store

In [19]: i = store.store['/building1/elec/meter1'].index

In [20]: i.is_monotonic
Out[20]: False

In [21]: i = store.store['/building10/elec/meter1'].index

In [22]: i.is_monotonic
Out[22]: False
```
In comparison, with ukdale and iawe:
``` python
In [23]: ds = DataSet("/home/nipun/ukdale.h5")

In [24]: store = ds.store

In [25]: i = store.store['/building2/elec/meter1'].index

In [26]: i.is_monotonic
Out[26]: True

In [27]: i = store.store['/building1/elec/meter1'].index

In [28]: i.is_monotonic
Out[28]: True
```
Answers: username_1: For future reference on this old issue, I believe the root cause is just that PostgreSQL doesn't sort rows if there is no specific `ORDER BY` statement; it just dumps matching rows. Since WikiEnergy/Dataport has too much data, I won't work on it anytime soon unless other issues show up. Let's just keep the issue open...
Status: Issue closed