repo_name: string (length 4 to 136)
issue_id: string (length 5 to 10)
text: string (length 37 to 4.84M)
executablebooks/jupyter-book
979430267
Title: HTML for special content blocks other than admonitions Question: username_0: ### Description / Summary I love the addition of HTML admonitions! It makes my notebooks so much more readable when readers download them. Thank you! But I have other special content blocks, such as sidebar margin content and epigraphs, that I'd also love to format as HTML blocks to make them more readable, too. ### Value / benefit This would be another step toward making downloaded notebooks as readable and aesthetically-pleasing as their rendered Jupyter Book counterparts. ### Implementation details After a little experimentation, it already seems mostly possible to format sidebar content and epigraphs as HTML content in a notebook. For example, when I use this in a notebook... ``` <div class="margin sidebar" style="background: lightyellow; padding: 10px"> <p class="sidebar title">Google Breakdown</p> See Google's breakdown of the best [Windows Command-line Tools](https://developers.google.com/web/shows/ttt/series-2/windows-commandline) </div> ``` ...it correctly renders as sidebar content in the Jupyter Book. ![Screen Shot 2021-08-25 at 9 47 53 AM](https://user-images.githubusercontent.com/20325102/130831820-a0fa0b76-609e-4d88-8dab-8485ccf86770.png) The only issue here is that the yellow background gets included in the rendered style of the sidebar. It would be nice to have an option for background color in the notebook (to differentiate the margin content from other notebook content) that wouldn't get included in the Jupyter Book. Similarly, I can make an epigraph with HTML in a notebook... ``` <blockquote class="epigraph" style=" padding: 10px"> This is the meaning of our liberty and our creed; why men and women and children of every race and every faith can join in celebration across this magnificent Mall, and why a man whose father less than 60 years ago might not have been served at a local restaurant can now stand before you to take a most sacred oath. So let us mark this day with remembrance of who we are and how far we have traveled. <p class ="attribution">—<NAME>, Inaugural Presidential Address, January 2009 <p/> </blockquote> ``` ...that also renders (mostly) correctly in the Jupyter Book. ![Screen Shot 2021-08-25 at 9 50 11 AM](https://user-images.githubusercontent.com/20325102/130832116-3a32777b-4ad9-4686-a4e6-6d44baa146b4.png) The only issue here is that the attribution is not styled in exactly the same way as a MyST attribution. ### Tasks to complete So I guess next steps might be - [ ] Document the ability to make other special content blocks with HTML - [ ] Make it so that the HTML for margin content, epigraphs, etc. can translate exactly the same as MyST versions Answers: username_1: Hey! Thanks for digging into this. I didn't realize that you could use HTML to include other admonitions either haha. Agreed it would be cool to document this. For your second question, can you explain a bit further? I'm not quite sure what you mean about "translate the same as MyST versions" username_2: @username_0 just means adding extra cases to this module username_2: I would note although this is possible, it is being added to the document as raw html, rather than with html_admonitions which are getting specifically translated to admonition directives, and so e.g. will render correctly in PDF outputs (raw html would just be stripped) username_0: </div> ``` it is readable in the notebook, and it also renders when I build a PDF of my whole book, and when I do individual PDF downloads for page. 
<img width="993" alt="Screen Shot 2021-08-29 at 10 29 05 PM" src="https://user-images.githubusercontent.com/20325102/131281246-d88d8f6b-2d5f-4375-aaba-b7b4f4685ba0.png"> Same when I use an HTML div for an epigraph: ``` <blockquote class="epigraph" style=" padding: 10px"> This is the meaning of our liberty and our creed; why men and women and children of every race and every faith can join in celebration across this magnificent Mall,... <p class ="attribution">—Barack Obama, Inaugural Presidential Address, January 2009</p> </blockquote> ``` it renders correctly in the PDF as a blockquote (though it's not exactly the same style as the [MyST epigraph](https://jupyterbook.org/content/content-blocks.html?highlight=epigraph#epigraphs)) <img width="1032" alt="Screen Shot 2021-08-29 at 11 02 37 PM" src="https://user-images.githubusercontent.com/20325102/131292601-fd526c14-9ef3-464c-a1c9-e734fadd47bd.png">
VIPriors/vipriors-challenges-toolkit
990728963
Title: There are too many errors in the dataset of instance segmentation! Question: username_0: As title. Please check the dataset of instance segmentation, there are too many "human" instances with extremely small masks. And the val server also needs to be checked. Thanks. Answers: username_1: Hello, I'm surprised by this issue! Those annotations were meticulously made by 2 reliable annotators. It's probably not exempt from mistakes but your message sounds like there are many errors :o Please note that we annotated the _visible_ pixels of each human, we didn't guess where humans were occluded. Could you give 2 examples of bad annotations please? We could correct the segmentation masks if there are significant mistakes. username_1: Also, occluded masks can be split into multiple unconnected areas. You should refer to the instance ID in each pixel to recover the set of pixels that belong to that specific ID. username_0: Yes, thank you. I have uploaded some images with bboxes. Besides, these errors can be found directly by checking the json file, where many bboxes' "area" is smaller than 100 while the image size is larger than 1000*2000. And almost every image has the same error. ![KS-FR-BLOIS_24330_1513710808980_1](https://user-images.githubusercontent.com/32918497/132523846-502a405f-8028-4f4b-a679-d7fc1125448f.jpg) username_1: Here is the ground truth associated with the image you shared (plus code to help visualization) ``` mask[mask>3000] = 0 # remove the ball mask mask[mask>1000] = mask[mask>1000]-1000 # set player IDs offset to 0 mask = mask/np.max(mask) # normalize ids between 0 and 1 ``` ![image](https://user-images.githubusercontent.com/18050620/132684227-0169e157-a39b-413e-bfdd-945074bbac4d.png) username_1: The problem lies either in the provided piece of code that converts the ground-truth file into coco format (@username_2), or in your code somewhere, but the raw (png) files seem correct. username_2: Thanks for raising this issue, there is indeed a bug in the script that generated the bounding boxes. The masks should still be fine though, so this shouldn't affect training. Will upload the fixed .json files asap. username_3: could you pls notify me when the modified .json file is uploaded? Status: Issue closed username_2: The dataset is updated with the correct labels and can be downloaded here: https://surfdrive.surf.nl/files/index.php/s/I5ooM79qSMtpt86. The new labels are also uploaded to the evaluation server although this issue should not affect any scores as bounding boxes are not used for evaluation. Please let me know if you still experience any issues with the dataset.
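A quick way to gauge how widespread the tiny-box problem discussed above is would be to scan the annotation file for suspiciously small areas. The sketch below is only an illustrative check, not part of the challenge toolkit: the file name is hypothetical and the standard COCO field layout (`images`, `annotations`, `area`, `bbox`) is assumed.

```python
import json

ANNOTATION_FILE = "instances_train.json"  # hypothetical path to the COCO-style labels
MIN_AREA = 100  # threshold mentioned in the discussion above

with open(ANNOTATION_FILE) as f:
    coco = json.load(f)

# Map image ids to sizes so suspicious boxes can be reported in context.
image_sizes = {img["id"]: (img["width"], img["height"]) for img in coco["images"]}

suspicious = [ann for ann in coco["annotations"] if ann.get("area", 0) < MIN_AREA]

print(f"{len(suspicious)} of {len(coco['annotations'])} annotations have area < {MIN_AREA}")
for ann in suspicious[:10]:
    width, height = image_sizes.get(ann["image_id"], ("?", "?"))
    print(f"image {ann['image_id']} ({width}x{height}): bbox={ann['bbox']}, area={ann['area']}")
```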
boxed/mutmut
511028232
Title: Better Results Handling Question: username_0: I've been using mutmut a bit, and I think it would be very useful to have some amount of management commands for handling mutations. The commands I'm thinking of are: `mutmut pop <id>` which would apply a mutant and remove it from the results database. `mutmut remove <id>` which would remove a mutant from the results database. It might also be helpful to have a `mutmut audit` which would allow you to go through mutations one by one and drop/keep them. I think this would be very useful with managing false-positives. Answers: username_1: Interesting suggestions. This implies a way of working that is very different from how I work! I do support `mutmut run <id>`. I find that often when I think I've killed a mutation I haven't actually so I think I would abuse "pop". As for removing false positives, I would love as many examples as you can find in the issue dealing with whitelisting. I would prefer if we can not produce false positives or have a whitelisting system that can handle them when they happen.
MongoEngine/mongoengine
210380145
Title: Stop using deprecated collection methods for PyMongo v3.x Question: username_0: http://api.mongodb.com/python/3.4.0/api/pymongo/collection.html shows the following methods as deprecated: `insert`, `save`, `update`, `remove`, `find_and_modify`, `ensure_index`. We still use those in our codebase. We should appropriately utilize the new `insert_one`, `insert_many`, `replace_one`, `update_one`, `update_many`, `delete_one`, `delete_many`, `find_one_and_delete`, `find_one_and_replace`, and `find_one_and_update`. This might be a very tough project, because we'll have to conditionally support both the old collection methods for PyMongo v2.7-2.9 and new collection methods for v3.x, and ideally maintain near-identical behavior from the MongoEngine user's point of view. Answers: username_0: #1120 already attempted to tackle `remove` from the list above. username_1: Is this possible to use mongoengine in any way to avoid that deprecation warnings? username_2: Mark username_3: Has there been any progress? Is it an option to drop support for PyMongo < 3? Version 3.0 has been release 2015-04. This would make the task much simpler. username_4: I would agree with the last comment. Today it is not possible to update MongoDB to 3.2 or 3.4 if you heavily rely on Mongoengine, which leads to more and more security/performance debt. username_3: Would people be interested in a PR, which drops support for PyMongo < 3? username_5: I use [this routine](https://github.com/username_5/username_5.mongodb/blob/0836168a5f66961b2aff2caf600edc050ce24fab/username_5/mongodb/compat.py#L6-L17) to implement the save logic using the recommended methods. username_6: Any updates on this? username_7: Used `insert_one` and `replace_one` instead of `save` username_8: Hi all, Mongoengine keeps throwing a deprecation warning for for the **save** method with the following versions of Mongoengine and Pymongo, respectively: ``` mongoengine 0.16.3 more-itertools 5.0.0 pip 19.0.2 pluggy 0.8.1 py 1.7.0 pymongo 3.7.2 ``` ``` lib/python3.6/site-packages/mongoengine/document.py:503: DeprecationWarning: update is deprecated. Use replace_one, update_one or update_many instead. upsert=upsert, **write_concern) ``` Not sure where you guys are at with this. Thanks. username_9: 1 ~~~ it's probably not so good for the README code to create DeprecationWarnings If this isn't a duplicate I can make a PR. username_10: Hi @username_9, its worth fixing, feel free to open a PR username_11: I would like to know if I should continue using `save`, `update` and other deprecated functions. Or I should use only the new ones mentioned? The question arises because whole Documentation (tutorial and API) uses the deprecated ones and I don't find a single example of how to use the new ones, username_10: Unless you have a problem with the deprecation warning, I'd advise you to continue using save, etc. Pymongo hasn't dropped the support so its relatively harmless. MongoEngine will do the switch as some point. Note that the warning issued by .count is addressed in an existing MR
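For code that talks to PyMongo collections directly, the mapping away from the deprecated calls discussed above is mostly mechanical. The sketch below is not MongoEngine's internal fix, just a minimal illustration of how a `save`-style upsert can be expressed with `insert_one`/`replace_one`; the database and collection names are made up for the example.

```python
from pymongo import MongoClient

client = MongoClient()  # assumes a local mongod; adjust the URI as needed
collection = client["example_db"]["people"]  # hypothetical database and collection

def save_document(doc):
    """Rough equivalent of the deprecated collection.save():
    insert when there is no _id yet, otherwise replace (upsert) by _id."""
    if "_id" not in doc:
        doc["_id"] = collection.insert_one(doc).inserted_id
    else:
        collection.replace_one({"_id": doc["_id"]}, doc, upsert=True)
    return doc["_id"]

# First call inserts, second call replaces the same document.
person = {"name": "Ada", "role": "admin"}
save_document(person)
person["role"] = "staff"
save_document(person)
```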
pytorch/pytorch
731574527
Title: inconsistent results by tensor.min on different machines Question: username_0: ## 🐛 Bug torch.Tensor.min returns different results for the indices on the same tensor on different machines.. This hinders the replicability of experiments. ## To Reproduce Steps to reproduce the behavior: import torch t = torch.tensor([[41., 16., 26., 49., 7., 6., 0., 8., 76., 31., 54., 48., 28., 73., 27., 21., 13., 23., 20., 64., 45., 58., 3., 15., 43., 38., 1., 29., 24., 33., 22., 19.], [40., 15., 25., 48., 6., 5., 1., 7., 75., 30., 53., 47., 27., 72., 26., 20., 12., 22., 19., 63., 44., 57., 2., 14., 42., 37., 0., 28., 23., 32., 21., 18.], [39., 14., 24., 47., 5., 4., 2., 6., 74., 29., 52., 46., 26., 71., 25., 19., 11., 21., 18., 62., 43., 56., 1., 13., 41., 36., 1., 27., 22., 31., 20., 17.], [38., 13., 23., 46., 4., 3., 3., 5., 73., 28., 51., 45., 25., 70., 24., 18., 10., 20., 17., 61., 42., 55., 0., 12., 40., 35., 2., 26., 21., 30., 19., 16.], [37., 12., 22., 45., 3., 2., 4., 4., 72., 27., 50., 44., 24., 69., 23., 17., 9., 19., 16., 60., 41., 54., 1., 11., 39., 34., 3., 25., 20., 29., 18., 15.], [36., 11., 21., 44., 2., 1., 5., 3., 71., 26., 49., 43., 23., 68., 22., 16., 8., 18., 15., 59., 40., 53., 2., 10., 38., 33., 4., 24., 19., 28., 17., 14.], [35., 10., 20., 43., 1., 0., 6., 2., 70., 25., 48., 42., 22., 67., 21., 15., 7., 17., 14., 58., 39., 52., 3., 9., 37., 32., 5., 23., 18., 27., 16., 13.], [35., 60., 50., 27., 69., 70., 76., 68., 0., 45., 22., 28., 48., 3., 49., 55., 63., 53., 56., 12., 31., 18., 73., 61., 33., 38., 75., 47., 52., 43., 54., 57.]]) print(t.min(dim=-1)) On Machine 1 (see below) this produces: torch.return_types.min( values=tensor([0., 0., 1., 0., 1., 1., 0., 0.]), indices=tensor([ 6, 26, 26, 22, 22, 5, 5, 8])) while on Machine 2 (see below), it produces: torch.return_types.min( values=tensor([0., 0., 1., 0., 1., 1., 0., 0.]), indices=tensor([ 6, 26, 22, 22, 22, 5, 5, 8])) ## Expected behavior Same results on all architectures and versions. ## Environment ### Machine 1 PyTorch version: 1.5.0 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A [Truncated] Is CUDA available: True CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB GPU 1: Tesla V100-SXM2-16GB GPU 2: Tesla V100-SXM2-16GB GPU 3: Tesla V100-SXM2-16GB Nvidia driver version: 440.64.00 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.2 [pip3] torch==1.6.0a0+b31f58d [pip3] torchvision==0.7.0a0+78ed10c [conda] Could not collect ## Additional context Answers: username_1: Dimensional min is not deterministic, but you can try to make it so by setting number of threads to one. However it is not documented and not guaranteed to be supported. I'm leaving this issue open as I think we need to make docs more clear about it. username_2: The docs linked are for PyTorch 1.7 but you're using PyTorch 1.5. The PyTorch 1.5 docs describe the behavior you're seeing: https://pytorch.org/docs/1.5.0/torch.html?highlight=min#torch.min. If you update to PyTorch 1.7 I believe you'll get the behavior you expect. username_0: @username_2 Thanks, I had not noticed that. username_2: Please reopen this issue if you're still seeing unexpected behavior on PyTorch 1.7. Status: Issue closed
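Following the maintainer's hint above that dimensional `min` may become deterministic when only one thread is used, a minimal experiment could look like the sketch below. As noted in the thread, this behaviour is not documented or guaranteed, so treat it as a diagnostic rather than a fix.

```python
import torch

# Limit intra-op parallelism to a single thread before running the reduction.
# This mirrors the workaround suggested above; it is not a supported guarantee.
torch.set_num_threads(1)

t = torch.tensor([[3.0, 1.0, 1.0, 2.0],
                  [0.0, 4.0, 0.0, 5.0]])

values, indices = t.min(dim=-1)
print(values)   # tensor([1., 0.])
print(indices)  # which of the tied indices is returned may still vary across versions
```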
fossasia/pslab-hardware
684025590
Title: Digital inputs are mislabeled on bottom Question: username_0: The digital inputs (LA1-4) are mirrored on the bottom of the board. The pin labeled LA1 is actually LA4, LA2 is LA3, LA3 is LA2, and LA4 is LA1. ![](https://i.imgur.com/3yMs9DO.jpg) ![](https://i.imgur.com/4ZMKe5i.jpg) Answers: username_1: This pin set has been removed in the new version to give space for RTC battery. So I guess we can close this issue. Status: Issue closed
opencloset/OpenCloset-Cron-SMS
213986772
Title: Send SMS to customers with unpaid extension/overdue fees Question: username_0: https://github.com/opencloset/opencloset/issues/1189 <Revised handling of unpaid extension/overdue fees> Step 1 - When a renter's unpaid balance is first confirmed, the closet keeper (staff) manually sends an unpaid-balance SMS **_Step 2 - If an unpaid status still exists 3 days / 7 days after the return, automatically send an SMS to the renter_** **_Step 3 - If the unpaid status still remains after 2 weeks, automatically change the status to non-payment_** Step 4 - The closet keeper in charge of the 4th floor needs to check whether unpaid amounts have been deposited at least once every two days. We are requesting the processes for steps 2 and 3. ---------------------- For step 2, the SMS that is sent would read as follows: `[열린옷장] Dear ooo, the unpaid amount of 00,000 won arising from your rental extension or late return has not been deposited yet. Please deposit it to the designated account within today. Kookmin Bank 205737-04-003013, account holder: 사단법인 열린옷장` Answers: username_0: There is a helper that can calculate the extension/overdue fee. Because of a context problem the helper cannot be used here, so it would be better to modularize it and reuse it. late_fee: extension + overdue fee, overdue_fee: overdue fee, extension_fee: extension fee - these are the terms we currently use. How about creating a module such as `OpenCloset::Calculator::LateFee` that provides methods to calculate the extension fee, the overdue fee, and the combined extension + overdue fee? username_0: I have created https://github.com/opencloset/OpenCloset-Calculator-LateFee. username_0: For an accurate test we need to calculate the overdue/extension fee of a specific order using its return date, expected return date, and desired return date. username_0: It would be nice if we could also resolve https://github.com/opencloset/opencloset/issues/1167 together with this. username_0: I tested with order number `53955`. Based on the return date `Wed, 15 Mar 2017 05:48:20 +0900`: extension fee: 5,400 (1 day x 27,000 x 20%), overdue fee: 64,800 (8 days x 27,000 x 30%), total: 70,200. ``` perl use strict; use warnings; use OpenCloset::Schema; use OpenCloset::Calculator::LateFee; my $db = { ... }; my $schema = OpenCloset::Schema->connect( { dsn => $db->{dsn}, user => $db->{user}, password => $db->{pass}, %{ $db->{opts} }, } ); my $order = $schema->resultset('Order')->find( { id => 53955 } ); die "Not found order" unless $order; my $calc = OpenCloset::Calculator::LateFee->new; print $calc->_overdue_days($order), "\n"; print $calc->overdue_fee($order), "\n"; print $calc->_extension_days($order), "\n"; print $calc->extension_fee($order), "\n"; print $calc->late_fee($order), "\n"; __DATA__ 8 (연체일) 64800 (연체료) 1 (연장일) 5400 (연장료) 70200 (합계) ``` username_0: The test results look good. We can release the module and use it. username_0: Released [OpenCloset-Calculator-LateFee-v0.0.1.tar.gz](https://cpan.theopencloset.net/[email protected]/OpenCloset-Calculator-LateFee-v0.0.1.tar.gz). username_0: Overdue: the order is still being rented and the desired return date has passed. Today: 2017-03-15, desired: 2017-03-14 (1 day overdue). Extended: the order is still being rented, the expected return date has passed, but the desired return date has not. Today: 2017-03-15, expected: 2017-03-14, desired: 2017-03-16 (extended by 2 days, currently 1 day into the extension). An order can be overdue and extended at the same time, so duplicates in the list must be avoided. username_0: We need to tidy up the terminology. Unpaid: a returned order whose overdue fee (including the extension fee) or compensation fee has not been paid. Extended: an order still being rented whose expected return date has passed. Overdue: an order still being rented whose desired return date has passed. ---------------------------- @username_1 I am not sure who the target of the SMS messages should be. username_0: The target is customers with unpaid balances. If an order is still unpaid on the 3rd/7th day after the return, the SMS is sent automatically. username_0: ``` perl use strict; use warnings; use OpenCloset::Schema; use OpenCloset::Calculator::LateFee; my $db = { ... }; my $schema = OpenCloset::Schema->connect( { dsn => $db->{dsn}, user => $db->{user}, password => $db->{pass}, %{ $db->{opts} }, } ); my $order = $schema->resultset('Order')->find( { id => 41392 } ); die "Not found order" unless $order; my $calc = OpenCloset::Calculator::LateFee->new; printf "연체일: %d\n", $calc->overdue_days($order); printf "연체료: %d\n", $calc->overdue_fee($order); printf "연장일: %d\n", $calc->extension_days($order); printf "연장료: %d\n", $calc->extension_fee($order); printf "연장료+연체료: %d\n", $calc->late_fee($order); __DATA__ 연체일: 0 연체료: 0 연장일: 1 연장료: 6400 연장료+연체료: 6400 ``` username_0: ``` # ignore_status 옵션을 넣어서 돌리면, my $calc = OpenCloset::Calculator::LateFee->new( ignore_status => 1 ); 연체일: 1 연체료: 9600 연장일: 1 연장료: 6400 연장료+연체료: 16000 ``` username_0: @username_1 At what time should the SMS be sent each day? The time slots below already have scheduled jobs, so it would be good to avoid overlapping with them. - 08:00 - 11:30 - 11:35 - 11:40 - 11:45 - 14:20 - 19:00 username_1: Please make it 10:30. username_0: @username_1 Should it also be sent on public holidays? username_0: Once the money has been deposited into the designated account, is it the staff who changes the order to fully paid? username_1: @username_0 I had not thought about that; we cannot respond on public holidays, so I would like to exclude public holidays from sending. And until virtual accounts are introduced, the closet keeper changes the status to fully paid. username_0: I will create and maintain a new cron package for the unpaid-balance work. username_0: I have created https://github.com/opencloset/OpenCloset-Cron-Unpaid. Status: Issue closed username_1: Confirmed, thank you. ^^
Azure/azure-iot-sdk-node
240816368
Title: azure-iot-common/lib/results.d.ts compilation error with TypeScript 2.4 Question: username_0: <!-- Hi there! thank you for discovering and submitting an issue! Please first tell us a little bit about the environment you're running: The commands in the comments can be run directly in a command prompt. --> # Context - **OS and version used:** OS X Sierra - **Node.js version:** 7.7.3 - **npm version:** 4.1.2 - **list of installed packages:** azure-iot-common - **cloned repo:** private repo # Description of the issue: azure-iot-common/lib/results.d.ts(15,22): error TS2559: Type 'MessageEnqueued' has no properties in common with type 'Result'. ... Same error for all derived classes in results.d.ts Answers: username_1: Fix checked in. https://github.com/Azure/azure-iot-sdk-node/pull/57 Status: Issue closed
angular/angular
539967030
Title: Unhelpful error message: TypeError: Cannot read property '1' of null Question: username_0: <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Oh hi there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 bug report ### Affected Package Ivy rendering with a directive, unclear error message ### Is this a regression? Yes, it appear to be, did not get any errors like this in Angular 7 ### Description I'm trying to get our companies application ported to Angular 9 from Angular 7. I've already made many fixes, and came across this one: ``` app.module.ts:177 [ERROR]: TypeError: Cannot read property '1' of null at providersResolver (di_setup.ts:46) at Object.definition.providersResolver (providers_feature.ts:48) at resolveDirectives (core.js:17) at elementStartFirstCreatePass (core.js:19023) at ɵɵelementStart (core.js:19023) at Module.ɵɵelement (core.js:19023) at NWTRecordingControlsNWTRecordingControlsDetailComponent_div_53_div_1_div_4_div_3_div_1_span_1_div_14_Template (nwtrecordingcontrolsnwtrecordingcontrols-detail.component.html:784) at executeTemplate (core.js:17) at renderView (core.js:17) at TemplateRef_.createEmbeddedView (core.js:17) ``` I'm sure there is a bug on our side, but wanted to report this as the error message is extremely unhelpful. The template code that is failing is: ``` <div [froalaEditor]="frOptions" formControlName="NWTTextEditor" id="NWTTextEditor" [NwRecordingValueDirective]="{ actionType: 'setValue', configuration: { dataItem: 'NWTTextEditor'}, actions: ['formChange'] }"> </div> ``` Inside of this function: ``` export function providersResolver<T>( def: DirectiveDef<T>, providers: Provider[], viewProviders: Provider[]): void { const lView = getLView(); const tView: TView = lView[TVIEW]; if (tView.firstCreatePass) { const isComponent = isComponentDef(def); // The list of view providers is processed first, and the flags are updated resolveProvider(viewProviders, tView.data, tView.blueprint, isComponent, true); // Then, the list of providers is processed, and the flags are updated resolveProvider(providers, tView.data, tView.blueprint, isComponent, false); [Truncated] @angular-devkit/build-ng-packagr 0.900.0-rc.5 @angular-devkit/build-optimizer 0.900.0-rc.5 @angular-devkit/build-webpack 0.900.0-rc.5 @angular-devkit/core 9.0.0-rc.5 @angular-devkit/schematics 9.0.0-rc.5 @angular/cdk 9.0.0-rc.4 @angular/cli 9.0.0-rc.5 @ngtools/webpack 9.0.0-rc.5 @schematics/angular 9.0.0-rc.5 @schematics/update 0.900.0-rc.5 ng-packagr 9.0.0-rc.2 rxjs 6.5.3 typescript 3.6.4 webpack 4.41.2 </code></pre> **Anything else relevant?** <!-- ✍️Is this a browser specific issue? If so, please specify the browser and version. --> <!-- ✍️Do any of these matter: operating system, IDE, package manager, HTTP server, ...? If so, please mention it below. --> Answers: username_0: Just found the cause and the latest updates reflect the issue. When a local library is imported by NPM via a relative path, the Ivy compiler does not work correctly. username_0: Although not the same issue, https://github.com/angular/angular/issues/33395 also is referencing issues with ngcc and relative directories. username_0: Work-arounds for anyone else having this problem 1. Use `npm pack` and install the `tgz` file in your `package.json` as shown above 2. Turn on Ivy in your `tsconfig.json` for your library ``` "angularCompilerOptions": { ... 
"enableIvy": true } ``` 3. Manually run `ngcc` with `--target` pointing to the `dist` directory of your library after building it. username_1: Same error when `yarn link`ing to an npm package, from an app using Ivy. username_2: I had something very similar (using `npm link`) and the my solution was to add paths to `tsconfig.app.json`: ``` "paths": { "@angular/*": [ "./node_modules/@angular/*" ], ``` username_3: I resolved similar issue with following fix in angular.json of project: ``` "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "vendorSourceMap": true, "preserveSymlinks": true, ``` Status: Issue closed username_4: ngcc does not follow symlinks so you should not expect Angular libraries linked via `npm link` into an Ivy application project to work. The workaround is to follow @username_0 's comment here: https://github.com/angular/angular/issues/34478#issuecomment-567579719
fsek/web
64590525
Title: Remove old tables in Production database Question: username_0: In ```schema.rb``` https://github.com/fsek/web/blob/master/db/schema.rb there are a lot of old tables which are not in use anymore: - ```email_accounts``` - ```emails``` - ```lists``` - ```phrasing_phrase_version``` - ```phrasing_phrases``` - ```roles``` (when Cancancan is implemented, see: #128) Should we remove this or not? Answers: username_1: Yes, if they are not used they should be dropped. username_0: Can you fix this? Not sure what we decided about ```role```. username_1: I can but I'm a bit busy now and will be for a week. It's not hard to do if you have time on your hands. username_2: *It's not hard to do if you have time on your hands.* - @username_1 No, but you really need to know what you are doing :) username_1: @username_2 We do make regular backups, don't we :) But sure, we are not in a hurry at all, I can do it in a week or so. username_1: It seems someone has removed them now so I'm closing this. Status: Issue closed
single-cell-genetics/vireo
1172190917
Title: Vireo taking ages to run Question: username_0: Hi, I have some single-cell RNA-seq data for which I don't have genotype information. I ran cellSNP-lite on a merged BAM file containing all of the donors to genotype the single cells as follows: ``` cellsnp-lite -s data.dir/merged.bam -b data.dir/uniq_barcodes.tsv -O results.dir/merged.dir -R vcf/genome1K.phase3.SNP_AF5e2.chr1toX.hg38.vcf.gz --genotype --minCOUNT 10 --minMAF 0.1 -p 10 ``` I am now running Vireo as follows: ``` vireo -c results.dir/merged.dir -N 4 -o results/merged.dir --randSeed=3245 -p 30 ``` However, it has been running for three days and still hasn't finished. I have spoken to others who have used Vireo and they mentioned that it was fast, so I'm not sure if I'm doing something wrong? This is the log message so far: ``` [vireo] Loading cell folder ... [vireo] Demultiplex 41622 cells to 4 donors with 104779 variants. ``` Many thanks for the help. Best wishes, Lucy Answers: username_0: This is the log for cellSNP-lite in case that helps. ``` [I::main] start time: 2022-03-07 10:57:55 [W::check_args] Max depth set to maximum value (2147483647) [I::main] loading the VCF file for given SNPs ... [I::main] fetching 7352497 candidate variants ... [I::main] mode 1a: fetch given SNPs in 41622 single cells. [I::csp_fetch_core][Thread-2] 2.00% SNPs processed. [I::csp_fetch_core][Thread-3] 2.00% SNPs processed. [I::csp_fetch_core][Thread-5] 2.00% SNPs processed. ... [I::csp_fetch_core][Thread-9] 90.00% SNPs processed. [I::csp_fetch_core][Thread-9] 92.00% SNPs processed. [I::csp_fetch_core][Thread-9] 94.00% SNPs processed. [I::csp_fetch_core][Thread-9] 96.00% SNPs processed. [I::csp_fetch_core][Thread-9] 98.00% SNPs processed. [I::main] All Done! [I::main] end time: 2022-03-08 10:09:17 [I::main] time spent: 83482 seconds. ``` username_1: Hi Lucy, Thanks for the issue. Your dataset indeed looks relatively large. I wonder if memory is a bottleneck. You can check the memory usage with `free -h`. If that is the case, you can change your command line to `-p 1` to use only one CPU. Another option is to set a more stringent cutoff on `--minCOUNT`, e.g., 30 or 100 in cellsnp. It looks like you already have more than enough variants. Probably this is not the fastest strategy to sort it out, as you need to re-run cellsnp. Yuanhua username_0: Hi @username_1, Thank you for the quick response. I am running the command on a large compute cluster but maybe I didn't specify enough memory. How much memory would you recommend specifying? Why do you suggest using only one CPU (`-p 1`)? Would using more CPUs not make it quicker? If this does not work, I will try increasing the `--minCOUNT` threshold for `cellSNP`. Best wishes, Lucy username_1: I see. Probably you could start by specifying 50GB of memory. I guess it won't use more than 100GB. Another major factor for memory usage is the number of CPUs it uses, as n copies of the data will be used, one for each sub-process. So you may use `-p 4` as a safer start instead of 30.
bootstrap-vue/bootstrap-vue
272997184
Title: Bootstrap Dropdown XS (suggestion) Question: username_0: Is it possible to get this dropdown any smaller? It doesn't fit our layout at the moment. Thanks! ![selection_099](https://user-images.githubusercontent.com/6416323/32669431-c3aaddb6-c5fd-11e7-8208-ecad89aa87f9.png) ``` b-table.text-nowrap(striped,hover,small,responsive,head-variant='dark',foot-variant='dark',:items='current_orders',:fields='fields',:current-page='1',:per-page='limit',:sort-by='sortBy',:sort-desc='sortDesc',:no-local-sorting='true',@sort-changed='sort') //- Checkbox template(slot='HEAD_is_selected') b-checkbox(v-model='select_all',value='1',unchecked-value='0') template(slot='is_selected',slot-scope='data') b-checkbox(v-model='data.value',value='1',unchecked-value='0') //- Order Status template(slot='HEAD_status_id') status-menu template(slot='status_id',slot-scope='data') status(:order='data.item') //- Order Options template(slot='HEAD_id',slot-scope='data') options(:order='data.item') b-pagination.float-right(size='sm',:total-rows='orders.length',v-model='page',:per-page='limit') ``` ``` b-button-toolbar(key-nav) b-dropdown.mx-1.my-0(size='sm',right,text='All',variant='secondary') b-dropdown-item-button.py-0.px-1 All b-dropdown-item-button.py-0.px-1 Active b-dropdown-item-button.py-0.px-1 Pending b-dropdown-item-button.py-0.px-1 Booking b-dropdown-item-button.py-0.px-1 Hold b-dropdown-item-button.py-0.px-1 Booked b-dropdown-item-button.py-0.px-1 In-Transit b-dropdown-item-button.py-0.px-1 Rejected b-dropdown-item-button.py-0.px-1 Delivered b-dropdown-item-button.py-0.px-1 Inv. Approval b-dropdown-item-button.py-0.px-1 Ready to Inv. b-dropdown-item-button.py-0.px-1 Invoiced b-dropdown-item-button.py-0.px-1 Paid in Full b-dropdown-item-button.py-0.px-1 Cancelled b-dropdown.mx-1.text-nowrap(size='sm',right,text='Type',variant='primary') div input(type='checkbox',style='margin-left:10px') span &nbsp;&nbsp; Select All div(v-for='order_type in order_types',) input(type='checkbox',style='margin-left:10px') span &nbsp;&nbsp; {{ order_type.prefix }} - {{ order_type.name }} b-dropdown.mx-1(size='sm',right,text='Comm',variant='primary') b-dropdown.mx-1(size='sm',right,text='Sale',variant='primary') ``` Answers: username_1: You can set the dropdown button size via the `size` prop (accepted values are `sm` and `lg` as currently defined in Bootstrap V4 CSS). These values are defined by Bootstrap V4 as CSS as classes `.btn-sm` and `.btn-lg`. You could create your own `.btn-xs` size (and pass `xs` to the `size` prop) which sets the font-size, padding, etc. username_1: Check out https://github.com/twbs/bootstrap/issues/21881#issuecomment-341972830 username_0: @username_1 thanks so much man! Status: Issue closed
devdaydresden/devday_pwa
519248971
Title: Homepage Question: username_0: We need a homepage which displays links to the profile, location information, notifications (when notifications are available), the upcoming sessions (the next active session slot), and links to other pages. Components: - Iconlink Component - Notification Component - Card Component - Sessiongrid component
dwaaan/HRConvert2-Docker
856897111
Title: Expose temporary folder where file are uploaded Question: username_0: Dear Dev, Is it possible to expose as a volume the folder where the files are uploaded please? Thanks Answers: username_1: Hello, It is possible using mount binds. `mount -t none -o bind /home/converter /new/mount/location` https://superuser.com/questions/663213/two-distinct-mount-points-with-one-device username_2: Hi @username_1, assuming files are uploaded into a directory that is not used by anything else, I can add this as a volume in the docker-compose file, let me know
w3c/csswg-drafts
312173182
Title: [cssom-1] Serialization for declaration block with variable reference in shorthand may not be useful Question: username_0: The serialization of a declaration block with a variable reference in a shorthand, while some longhand is set differently, can lead to an undesired result, i.e. it can produce a non-idempotent result. For example, a declaration block like ```css margin: var(--foo); margin-top: 10px; ``` would, under the current algorithm, be serialized to something like ```css margin-right: ; margin-bottom: ; margin-left: ; margin-top: 10px; ``` which apparently doesn't have the same meaning as before. This is unfortunate. We should probably change the serialization algorithm so that, when we find a longhand which was expanded from some shorthand with a variable reference, the shorthand with the variable reference is always serialized. If there are longhands that have been changed since the expansion of the shorthand (like `margin-top` in the case above), those longhands are not added to *already serialized*, so that their value is still preserved and will be serialized later. The above algorithm should work fine with declaration blocks that come from parsing, since the parsing result uses specified order, and thus any update to a longhand which was expanded from a shorthand would always come after the shorthand. However, there is currently some problem with CSSOM setters, because in the spec they still set in place rather than append. For that, we've resolved in [#1898 (comment)](https://github.com/w3c/csswg-drafts/issues/1898#issuecomment-342556321) that setters should append rather than set in place, so this is probably not a problem anyway. We may still want that change to happen first, though. cc @csnardi Answers: username_0: It becomes complicated again now that #2924 makes it not append, but just adds some constraints... username_1: Commenting to note that legacy shorthands (which serialize to the non-legacy shorthands that replaced them) are a subset of this issue, but are handled slightly differently in CSSOM, so we should make sure they're solved properly at the same time as this.
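To make the proposed rule concrete, here is a small Python sketch of the serialization order being suggested above: emit the shorthand once for longhands that were expanded from a shorthand containing a variable reference, and emit separately any longhand that was set afterwards. This only illustrates the idea with the margin example; it is not the CSSOM algorithm itself.

```python
# Each declaration: (property, value, shorthand it was expanded from, or None).
declarations = [
    ("margin-top", "var(--foo)", "margin"),
    ("margin-right", "var(--foo)", "margin"),
    ("margin-bottom", "var(--foo)", "margin"),
    ("margin-left", "var(--foo)", "margin"),
    ("margin-top", "10px", None),  # later override of one longhand
]

# Pending-substitution value remembered per shorthand.
shorthand_values = {"margin": "var(--foo)"}

def serialize(decls):
    out, emitted = [], set()
    for prop, value, shorthand in decls:
        if shorthand is not None:
            # Longhand came from a shorthand holding a var() reference:
            # serialize the shorthand once instead of empty longhands.
            if shorthand not in emitted:
                out.append(f"{shorthand}: {shorthand_values[shorthand]};")
                emitted.add(shorthand)
        else:
            out.append(f"{prop}: {value};")
    return " ".join(out)

print(serialize(declarations))  # margin: var(--foo); margin-top: 10px;
```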
dotnet/arcade
377980039
Title: Non-SDK projects that import arcade targets fail to determine an SDK version Question: username_0: A project that does not use the .NET SDK (e.g. some corefx projects) but uses some of the arcade targets tends to eventually import Versions.targets and fail because of https://github.com/dotnet/arcade/pull/1234 @username_1 I'm assigning this to you. We probably can't require that the .NET SDK be used in each and every project that uses any part of arcade. Maybe this check should be conditional on whether the NetCoreSDKVersion property is empty or not? Answers: username_1: yup, that'd be the right fix. username_2: To check that I understand this issue: currently nothing is broken, since @username_0 reverted the change, and the purpose of this issue is to keep in mind that, when we check for the required .NET version, there are projects that don't use it? username_1: We need to put the (corrected) check back. Let me do that. Status: Issue closed
PagerDuty/pdjs
849320674
Title: delete response with response lacking response body throws exception Question: username_0: Hi Pagerduty, When you make a `DELETE` call to `/services/<service-id>` the default response will be a `204` without a response body and the code will try to process the response through a `.json()` call on https://github.com/PagerDuty/pdjs/blob/main/src/api.ts#L100 and throw hitting the `.catch` even on a valid response, thereby rejecting the promise. library version: `2.2.1` Please let me know if I can provide additional context. Answers: username_1: Hey @username_0 , thanks for the report we'll have a look into this in the next few weeks. If you have time to attempt a fix yourself that would be super helpful. Thanks! Status: Issue closed
jishi/node-sonos-http-api
119301054
Title: presets.json Question: username_0: Hi I have a question, not an issue. Don't know if this is the right place, but did not find any other possibility to ask.... I would like to 1* buffer / remember active state of all my rooms/players 2* put all players on mute 3* play 1 sound in one of my rooms 4* go back to starting state, the one buffered at #1 Would this be possible? thanks a lot! Answers: username_1: Hm... well, sure, anything is possible. Basically you want to do what the sayAll command does, but only for one speaker? This can be a bit tricky, I haven't yet perfected this since you need to do a lot of waiting for the correct state of your players to apply, before you continue on each step. This make the success a bit intermittent. Does this answer your question? username_0: thanks username_1 let me put the question different How can I create one single script, doing the following for 1 player 1. buffer current state 2. play a sound or a song or one of the presets 3. go back to initial state (as buffered at the start at line #1) Would this be possible? username_1: I have tried to do something similar with the say and sayAll commands, but there are some timing issues that I haven't been able to resolve yet. You can look at that for inspiration. If you can live with a non-exact return to the buffered current state, it is fairly simple to achieve since the preset can both take an uri to play, as well as track no in queue and elapsed time (IIRC). username_0: ok, now I understand you answer, thanks for that! My challenge - I am not able to write JS code. Could you help? I can live very well with a non-exact return to the buffered current state, no problem with that Also, taking only one player into scope would be enough. No worries about the other players. Could you give me some support on this? username_1: Unfortunately I must put my efforts on improving other stuff on this. I'm closing this issue since I can't give you more info than this. I will probably implement the possibility to play custom mp3-files that might be what you are looking for, follow issue #136 if you want to keep track of status. Status: Issue closed
multiformats/multicodec
578092244
Title: Automated PR Response Question: username_0: We should have an automated PR response that suggests avoiding the 0-127 range. Answers: username_1: This is a pretty easy GitHub Action. Probably can be done with grep and awk, just pipe the git diff between the head of the PR and master.
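A sketch of what the automated check behind such a PR response could do, assuming the codes live in the repository's table.csv with the code in the third column and that the PR is diffed against master (both are assumptions for illustration): flag any added entry whose code falls in the 0x00-0x7f range.

```python
import re
import subprocess

# Hypothetical: diff the PR head against the default branch for table.csv only.
diff = subprocess.run(
    ["git", "diff", "origin/master...HEAD", "--", "table.csv"],
    capture_output=True, text=True, check=True,
).stdout

warnings = []
for line in diff.splitlines():
    if not line.startswith("+") or line.startswith("+++"):
        continue  # only inspect lines the PR adds
    fields = [field.strip() for field in line[1:].split(",")]
    if len(fields) < 3 or not re.fullmatch(r"0x[0-9a-fA-F]+", fields[2]):
        continue
    if int(fields[2], 16) < 0x80:
        warnings.append(f"{fields[0]}: code {fields[2]} is in the reserved 0x00-0x7f range")

print("\n".join(warnings) or "No new entries in the 0x00-0x7f range.")
```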
reactor/reactor-core
420797950
Title: bufferTimeout overflows when sink produces requested amount of data Question: username_0: ### Expected behavior Calling `sink.next` `sink.requestedFromDownstream()` times should never result in an error. ### Actual behavior A combination of delayed producer, multiple, slow consumers, and bufferTimeout results in `reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests` (see steps.) ### Steps to reproduce JUnit test case: ``` @Test void bufferAllowsRequested() throws InterruptedException { ExecutorService workers = Executors.newFixedThreadPool(4); AtomicBoolean down = new AtomicBoolean(); Flux.create(sink -> { produceRequestedTo(down, sink); }).bufferTimeout(400, Duration.ofMillis(200)) .doOnError(t -> { t.printStackTrace(); down.set(true); }) .publishOn(Schedulers.fromExecutor(workers), 4) .subscribe(this::processBuffer); Thread.sleep(3500); workers.shutdownNow(); assertFalse(down.get()); } private void processBuffer(List<Object> buf) { System.out.println("Received " + buf.size()); try { Thread.sleep(400); } catch (InterruptedException e) { e.printStackTrace(); } } private void produceRequestedTo(AtomicBoolean down, FluxSink<Object> sink) { Thread thread = new Thread(() -> { while(!Thread.interrupted()) { try { if (sink.requestedFromDownstream() > 0) { System.out.println("Requested " + sink.requestedFromDownstream()); Thread.sleep(new Random().nextInt(1000)); IntStream.range(0, Math.min(1000, (int) sink.requestedFromDownstream())).forEach(sink::next); } else { Thread.sleep(200); } } catch (InterruptedException ignored) { break; } catch (Exception e) { e.printStackTrace(); down.set(true); [Truncated] thread.setDaemon(true); thread.start(); } ``` ### Reactor Core version 3.2.6 ### JVM version (e.g. `java -version`) 1.8.0_201 I *think* what is happening is that the slow producer is not quick enough, so bufferTimeout times out, produces an undersized buffer, which reduces upstream demand by 1 buffer, and thus we technically have overflow when exactly N * bufferSize items arrive. The use case is adapting any pull based source (Kafka, a database, etc.) where we can and do want to respect back-pressure, and have plenty of data to meet the request with. All the `onBackpresureBuffer` methods are inappropriate as the source we are pulling from has far more data than we can hold in memory, and data loss is not an option. IMO bufferTimeout should be prepared to buffer up to the amount requested before it's first timeout following a full buffer. This is reasonably bounded demand, and in practice ought not to result in more items residing in memory than were requested. Answers: username_0: The `Flux.create` can also be replaced with something like `Flux.generate(this::syncGenerateACollection).flatMap(Flux::fromIterable)` and produce the same effect. I cannot figure out how to inject any sort of bounded buffering that will prevent overflow. username_1: I have a simpler reproducer for this, not involving any explicit multithreading or slow consumers. 
``` import reactor.core.publisher.Flux; import java.time.Duration; import java.util.stream.IntStream; public class Main { public static void main(String[] args) { poll() .bufferTimeout(10, Duration.ofMillis(100)) .flatMap(Flux::fromIterable, 1) .subscribe(x -> sleep(1)); } private static Flux<Integer> poll() { return Flux.create(sink -> { for (int i = 0; !sink.isCancelled(); i++) { int requested = (int) sink.requestedFromDownstream(); System.out.printf("Polling iteration %d, requested = %d%n", i, requested); if (i < 5) { sink.next(i); } else { IntStream.range(100 * i, 100 * i + requested).forEach(sink::next); } sleep(100); } }); } private static void sleep(long millis) { try { Thread.sleep(millis); } catch (InterruptedException e) { throw new RuntimeException(e); } } } ``` Result: ``` [ERROR] (parallel-1) Scheduler worker in group main failed with an uncaught exception - reactor.core.Exceptions$ErrorCallbackNotImplemented: reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests reactor.core.Exceptions$ErrorCallbackNotImplemented: reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests Caused by: reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests at reactor.core.Exceptions.failWithOverflow(Exceptions.java:215) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.flushCallback(FluxBufferTimeout.java:204) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.onNext(FluxBufferTimeout.java:254) at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793) at reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718) at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153) at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110) at java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:557) at Main.lambda$poll$1(Main.java:25) at reactor.core.publisher.FluxCreate.subscribe(FluxCreate.java:94) at reactor.core.publisher.FluxBufferTimeout.subscribe(FluxBufferTimeout.java:67) at reactor.core.publisher.FluxFlatMap.subscribe(FluxFlatMap.java:97) at reactor.core.publisher.Flux.subscribe(Flux.java:7799) at reactor.core.publisher.Flux.subscribeWith(Flux.java:7963) at reactor.core.publisher.Flux.subscribe(Flux.java:7792) at reactor.core.publisher.Flux.subscribe(Flux.java:7756) at reactor.core.publisher.Flux.subscribe(Flux.java:7699) at Main.main(Main.java:14) ``` I suspect that the problem is caused by two threads concurrently pushing the buffer downstream. One is the main thread where producer is executed. Another is the thread internally created by bufferTimeout() for flushing on timeout. username_2: Yes, I also faced this problem: [Prevent flux.bufferTimeout from overflowing after timeout](https://stackoverflow.com/questions/54151419/prevent-flux-buffertimeout-from-overflowing-after-timeout). The author offers a solution implementirovan own [BatchSubscriber](https://gist.github.com/hossomi/5edf60acb534a16c025e12e4e803d014#file-batchsubscriber-java) Can this be solved by regular means of Reactor? username_3: I am having this situation... I have a stream of Flux and Webclient joined together. I want to join both The Webclient Flux is timing out. Any suggestion into how to solve this situation. 
my code is at: https://github.com/username_3/mimecast-backend https://github.com/username_3/mimecast-frontend username_4: We ended up rolling our own BatchSubscriber based on the one @username_2 linked to. It isn't a very satisfying solution, but it did get us unblocked. I have done some digging into this issue, and discovered that the scenario we are running into is actually explicitly tested for in the unit tests (FluxBufferTimeoutTest::bufferWithTimeoutThrowingExceptionOnTimeOrSizeIfDownstreamDemandIsLow, https://github.com/reactor/reactor-core/blob/master/reactor-core/src/test/java/reactor/core/publisher/FluxBufferTimeoutTest.java). Probably the easiest 'fix' would be to document this behaviour. I'm not sure how you would succinctly describe the behaviour to people looking to use the function, and I suspect that for a lot of use cases this isn't desired. Actually fixing the issue is a bit of thornier problem. For one, there may be people relying on the existing behaviour, and it would probably be bad form to break things for them. And then there are the difficulties in actually implementing the fix. I think I have identified two aspects of FluxBufferTimeout that would need addressing: ### It is very greedy, to the extent that the backpressure doesn't really work When downstream requests n items with a call FluxBufferTimout, n * batchSize items are requested from upstream. This happens regardless of how many unfulfilled items have already been requested from upstream. For instance, consider a bufferTimeout with a batchSize of 10. FluxBufferTimout::request(1) results in 10 items requested from upstream. A single item is delivered before the timeout, leaving 9 outstanding. This one item is sent downstream, resulting in another call to FluxBufferTimout::request(1), which requests another 10 from upstream. There are now 19 outstanding requests. This is the scenario revealed in the logging performed by @username_1's example. I think this could probably be fixed without breaking anybody. My proposed solution would be to keep an internal record of how many items have been requested from upstream, and only request enough from upstream to bring the number outstanding up to (requested * batchSize) ### There would need to be internal buffering Consider the following scenario. FluxBufferTimout::request(5) is called with a batchSize of 10, resulting in 50 items being requested from upstream. 5 items are delivered, with the buffer timing out between each one. Downstream takes its time processing these five batches, and does not request any more. There are still 45 outstanding items requested from upstream, enough for four full batches and one partial one. When these items are delivered from upstream, the current behaviour is to error. The behaviour that we would like would be for those batches to be stored on an internal queue, with new requests from downstream being fulfilled from that queue. The scheduling of the flush on timeout would need to be disabled while that internal queue has items on it. I think this is probably the much more complicated change to implement, as well as breaking existing behaviour for those who rely on the current implementation username_4: I have raised a pull request that should address the greediness issue. If you are only requesting a single item at a time, then my fix should solve all of your problems here. Otherwise the change to add internal buffering would be required. 
Status: Issue closed username_6: this appears to not be fixed when combining with reactor-kafka (source) and webclient (sink). I am using the KafkaReceiver::receive method. username_7: Hi @username_6, I just tried the following and it seems to be working: ```java sendMessages(0, 100_000); KafkaReceiver<Integer, String> receiver = createReceiver(); receiver.receive() .bufferTimeout(10, Duration.ofMillis(100)) .flatMap(Flux::fromIterable, 1) .take(100_000) .count() .log() .block(); ``` username_8: I have this error with reactor Kafka 1.2.2 and reactor core 3.3.7: ``` Flux.just(receiverOptions) .map(KafkaReceiver::create) .flatMap(kafkaReceiver -> kafkaReceiver.receive() .bufferTimeout(batchSize, Duration.ofMillis(50)) .concatMap(batch -> processBatch(kafkaReceiver, batch)) ).doOnError(err -> LOGGER.error("Unexpected error in consuming Kafka events", err)) ``` I read a batch of Kafka messages using `kafkaReceiver.receive()` and then I want to process them all and only after that ask for another batch. And `processBatch` may take a long time (200 ms - 10 sec). username_6: Pleased to know it's not just me! username_9: This is not perfect, but I've created a Publisher doing what I was expecting. https://gist.github.com/username_9/e38834aa9df0c56f23e2d8d2e6899c78 username_10: We have the same issue (with reactor-kafka): @username_5, can we reopen the defect? ``` reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests at reactor.core.Exceptions.failWithOverflow(Exceptions.java:234) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.flushCallback(FluxBufferTimeout.java:219) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.onNext(FluxBufferTimeout.java:269) at reactor.core.publisher.FluxDistinctUntilChanged$DistinctUntilChangedSubscriber.tryOnNext(FluxDistinctUntilChanged.java:142) at reactor.core.publisher.FluxDistinctUntilChanged$DistinctUntilChangedSubscriber.onNext(FluxDistinctUntilChanged.java:95) at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:495) at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:555) at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:640) at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.onNext(FluxGroupBy.java:680) at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:205) ``` username_11: @username_5 The same issue is observed using reactor-core:3.4.12. ``` private final Sinks.Many<Item> sink = Sinks.many().unicast().onBackpressureBuffer(); //... sink.emitNext(update, OptimisticEmitFailureHandler.busyLooping(Duration.ofSeconds(2))); //.. elasticsearchService.bulkUpsert(sink.asFlux()).subscribe(); ``` ``` public Flux<Pair<Integer, Integer>> bulkUpsert(Flux<Item> updates) { return updates.bufferTimeout(bulkSize, Duration.ofSeconds(2)) .flatMap(bulk -> adapter.bulkUpsert(bulk)); } ``` If we change it to `buffer(bulkSize)` it works.
``` public class OptimisticEmitFailureHandler implements EmitFailureHandler { private final long deadline; public static EmitFailureHandler busyLooping(Duration duration){ return new OptimisticEmitFailureHandler(duration); } OptimisticEmitFailureHandler(Duration duration){ this.deadline = System.nanoTime() + duration.toNanos(); } @Override public boolean onEmitFailure(SignalType signalType, EmitResult emitResult) { return emitResult.equals(EmitResult.FAIL_NON_SERIALIZED) && (System.nanoTime() < this.deadline); } } ``` ``` ERROR parallel-3 r.c.p.Operators - Operator called default onErrorDropped reactor.core.Exceptions$ErrorCallbackNotImplemented: reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests Caused by: reactor.core.Exceptions$OverflowException: Could not emit buffer due to lack of requests at reactor.core.Exceptions.failWithOverflow(Exceptions.java:233) Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: Assembly trace from producer [reactor.core.publisher.FluxBufferTimeout] : reactor.core.publisher.Flux.bufferTimeout com.upstreamsystems.profiling.profiles.service.elasticsearch.ProfileElasticsearchService.bulkUpsertUserProfile(ProfileElasticsearchService.java:100) Error has been observed at the following site(s): *__Flux.bufferTimeout ? at com.upstreamsystems.profiling.profiles.service.elasticsearch.ProfileElasticsearchService.bulkUpsertUserProfile(ProfileElasticsearchService.java:100) |_ Flux.flatMap ? at com.upstreamsystems.profiling.profiles.service.elasticsearch.ProfileElasticsearchService.bulkUpsertUserProfile(ProfileElasticsearchService.java:101) Original Stack Trace: at reactor.core.Exceptions.failWithOverflow(Exceptions.java:233) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.flushCallback(FluxBufferTimeout.java:227) at reactor.core.publisher.FluxBufferTimeout$BufferTimeoutSubscriber.lambda$new$0(FluxBufferTimeout.java:158) at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84) at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ``` username_10: We have written non perfect tool using standard operators: ```java public static <T> Flux<List<T>> bufferTimeout(Flux<T> input, Duration flushInterval, int bufferSize, Scheduler onFlushScheduler, T nullValue, boolean passNullEvent) { return Flux.defer(() -> { AtomicReference<List<T>> tmp = new AtomicReference<>(); return input .mergeWith(Flux.interval(flushInterval) .onBackpressureDrop() .publishOn(onFlushScheduler) .map(v -> nullValue)) .<List<T>>handle((v, sink) -> { List<T> arr = tmp.get(); if (arr == null) { arr = new ArrayList<>(bufferSize); tmp.set(arr); } boolean send = false; if (v != nullValue) { arr.add(v); send = arr.size() >= bufferSize; } else if (!arr.isEmpty()) { send = true; } else if (passNullEvent) { arr.add(v); send = true; } if (send) { tmp.set(null); sink.next(arr); } }); }); } // Then we buffer some data and for example flush into Kafka every second or every 32 items emitted as List like this. // One just need to pass special marker instance that means nothing (NULL_VALUE). 
That one is used as eventing downstream // that timeout happened inFlux .transform(input -> FluxUtils.bufferTimeout(input, Duration.ofSeconds(1), 32, kafkaScheduler, NULL_VALUE, true)) ..... .transform(this::send) .then(); ```
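To make the request-accounting problem analysed above easier to follow, here is a small language-neutral simulation in Python of how upstream demand piles up when every downstream request(1) triggers a fresh batchSize-sized request upstream while earlier requests are still only partially fulfilled. It mirrors the worked numbers in the comment (batch size 10, one item delivered per timeout flush); it is not Reactor's implementation.

```python
BATCH_SIZE = 10
outstanding_upstream = 0    # items requested upstream but not yet delivered
delivered_per_flush = 1     # the timeout fires after a single item each time

def downstream_request(n, greedy=True):
    """Model FluxBufferTimeout.request(n): the greedy (current) behaviour always
    asks upstream for n * BATCH_SIZE more, regardless of what is still outstanding.
    The non-greedy variant only tops demand up to n * BATCH_SIZE in total."""
    global outstanding_upstream
    if greedy:
        outstanding_upstream += n * BATCH_SIZE
    else:
        outstanding_upstream += max(0, n * BATCH_SIZE - outstanding_upstream)

for step in range(3):
    downstream_request(1)                        # downstream asks for one buffer
    outstanding_upstream -= delivered_per_flush  # one item arrives, the timeout flushes it
    print(f"after flush {step + 1}: outstanding upstream demand = {outstanding_upstream}")

# Greedy output: 9, 18, 27 - demand keeps growing even though downstream
# only ever asked for one buffer at a time.
```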
alexreinert/homematic_check_mk
1119335519
Title: ip tool is not found on raspberrymatic. Question: username_0: Line 44 in /usr/local/addons/check_mk_agent/server.tcl: on Raspberrymatic the ip tool is installed in sbin, and sbin is not in $PATH. The absolute path to the ip tool therefore has to be specified on line 44: puts $channelId "[exec **/sbin**/ip link]"
wearhacks/main_wearhacks
122294237
Title: Typing in wearhacks.com leads to Error page Question: username_0: Alex reports that when typing in "wearhacks.com", it directs her browser (Chrome) to https://www.wearhacks.com, and leads to a "Your Connection is not Private" error page. See screenshot here: https://files.slack.com/files-pri/T02N93RGQ-F0GM79Y5T/screen_shot_2015-12-15_at_9.51.10_am.png Answers: username_1: that happens because we do not have SSL encryption, and for some reason typing in wearhacks.com on Alex's computer auto-completes to 'https://www.wearhacks.com'. Notice the 's' after 'http'. That 's' stands for secure. So the browser is trying to access the website using the secure protocol when the website doesn't support that. Status: Issue closed
BobBowles/django-diary
224453594
Title: Designing Question: username_0: Hi, I'm new to Django. I'm running Django-diary on my local system, but I can't figure out how to change the design. If I want to add any input fields to the login page or register page, where do I need to change the code? Answers: username_1: Hi Malarvi, If you have downloaded the app from Github you have access to all the code locally, so you can change it how you want. It has been a while since I was active developing Django-Diary but I tried to structure it in a 'conventional' way. The data model is defined in the diary/models.py file, for example. Changing the login and/or registration is a bit more fiddly, because I chose to subclass the User to create Customer. You will find the forms used in diary/admin.py. If you have worked through the online tutorials, like DjangoGirls for example, that should give you enough knowledge to know where to start. For the rest, I find StackExchange and the Django online documentation very good. If you need more specific help I will need to know what you are trying to do. Best wishes Bob username_0: Hi, Thank you for your suggestions. I have one more error: when the admin adds an entry in the administrator site, it shows the following: **'NoneType' object has no attribute 'is_staff'**. Status: Issue closed username_1: That sounds like a typical Python error when a null object is passed as an argument. Possibly something was not initialized correctly. More information should be in the stack trace. username_0: I also need to enable adding task reminders for the current day. I enabled the current day, but while adding reminders it says I need to book ahead. So I need your help to change that. username_1: The reminders information is provided by the reminders function in views.py. The function currently generates a queryset of the relevant entries for today and tomorrow, discounting the ones that fall before the present time. Thus, if the page is generated at 13:30 on 3rd May, 2017, the entries in the queryset will include only those entries with a start date/time greater than 13:30, May 3rd 2017, and strictly less than 5th May, 2017. If you want to change what it does you need to change the queries in that function. Status: Issue closed
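If the goal described above is simply to collect an extra field on the register page, a common Django pattern is to subclass the stock `UserCreationForm`. The sketch below is generic Django rather than django-diary's actual Customer form; the extra field, template path, and URL name are hypothetical, but it shows the shape of the change the maintainer points at.

```python
# forms.py (hypothetical app): extend the built-in registration form with one field.
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User

class RegistrationForm(UserCreationForm):
    phone_number = forms.CharField(max_length=20, required=False)  # hypothetical extra field

    class Meta(UserCreationForm.Meta):
        model = User
        fields = ("username", "email", "phone_number")

# views.py: render and process the form in a plain function-based view.
from django.shortcuts import render, redirect

def register(request):
    form = RegistrationForm(request.POST or None)
    if request.method == "POST" and form.is_valid():
        form.save()  # the extra value still needs a home (e.g. a profile model), omitted here
        return redirect("login")  # assumes a URL pattern named "login"
    return render(request, "registration/register.html", {"form": form})
```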
wjcnr/issues
854283841
Title: [BUG] False Anticheat Question: username_0: **Bug description** The anticheat is too sensitive and sometimes produces false detections: the player is auto-kicked when they jump out of a plane and glide with a parachute, kicked with the reason Fly-hack. **How to see/reproduce the bug** Steps to make the bug visible: 1. Fly using a Shamal/plane 2. Jump out of the plane 3. Glide with the parachute by pressing W 4. you will be kicked from the server automatically ... **What should happen** No automatic kick, because this is not a fly hack. **Screenshots** https://media.discordapp.net/attachments/755015977577611286/829321867411193866/sa-mp-282.png?width=640&height=480 https://media.discordapp.net/attachments/755015977577611286/829327253724200970/unknown.png?width=640&height=480 **Server version** 3.0.8 **Additional notes**<issue_closed> Status: Issue closed
jlippold/tweakCompatible
445780535
Title: `Chrome Handoff` not working on iOS 12.1.1 Question: username_0: ``` { "packageId": "me.qusic.continuitybrowser", "action": "notworking", "userInfo": { "arch32": false, "packageId": "me.qusic.continuitybrowser", "deviceId": "iPhone8,1", "url": "http://cydia.saurik.com/package/me.qusic.continuitybrowser/", "iOSVersion": "12.1.1", "packageVersionIndexed": false, "packageName": "Chrome Handoff", "category": "Tweaks", "repository": "BigBoss", "name": "Chrome Handoff", "installed": "0.1", "packageIndexed": false, "packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.", "id": "me.qusic.continuitybrowser", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Handoff Webpage to Chrome", "latest": "0.1", "author": "Qusic", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "not working", "notes": "Appears to do nothing! " } ```<issue_closed> Status: Issue closed
cisco/openh264
1087288376
Title: Bug report on openh264 (AddressSanitizer: heap-buffer-overflow) Question: username_0: Describe the bug A bug was found within openh264. Though it might not be an intended use of the relevant API, the bug can still produce critical issues within a program using openh264. It would be best if the affected logic is checked beforehand. The bug was found with a fuzzer based on the function "CSliceBufferReallocatTest.Reallocate_in_one_partition". This may cause problems for programs using the library. How To Reproduce 1. Download the attached file 2. Execute make_openh264_bug1.sh 3. ./codec_unittest --gtest_filter=CSliceBufferReallocatTest.Reallocate_in_one_partition ==47445==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000052f8 at pc 0x000000f1b15d bp 0x7ffe979a7990 sp 0x7ffe979a7988 WRITE of size 8 at 0x6020000052f8 thread T0 #0 0xf1b15c in WelsCommon::WelsMalloc(unsigned int, char const*, unsigned int) openh264/codec/common/src/memory_align.cpp:95:55 #1 0xf1b489 in WelsCommon::CMemoryAlign::WelsMalloc(unsigned int, char const*) openh264/codec/common/src/memory_align.cpp:129:20 #2 0xf1b340 in WelsCommon::CMemoryAlign::WelsMallocz(unsigned int, char const*) openh264/codec/common/src/memory_align.cpp:118:20 Platform (please complete the following information): OS: Ubuntu 18.04 [openh264_bug2.tar.gz](https://github.com/cisco/openh264/files/7765986/openh264_bug2.tar.gz) Answers: username_1: pCtx->pSvcParam->iSpatialLayerNum = -2 is not supported. -2 is an invalid parameter. username_2: Hello, it seems that you know this code very well. I would like to ask: do you know what module this file directory is used for? What is the difference between gmp-openh264.cpp and the common openh264? username_1: gmp is for Firefox. Please use openh264 if you just want to use the openh264 codec. username_1: If there is no update, this issue will be closed later.
dependabot/feedback
332891723
Title: Bug: Commits entries need escaping to prevent unwanted modifications to local issues Question: username_0: The PRs that Dependabot generates do not properly escape the Commits entries, such that if they contain text such as `#1234` these issues will reference local issues, causing unwanted and confusing noise on those issues. It gets worse when the commit messages contain actions such as `Fixes #1234` and then closing the Dependabot PR will close some unrelated local issue. The attached screenshot is of a Rubocop update PR with a tooltip showing a reference to an issue that is local. ![image](https://user-images.githubusercontent.com/26596/41486165-4c7f09e2-7098-11e8-8f8f-dc3f5df2a7b0.png) Answers: username_1: Eek, hadn't seen that before - will fix right away. username_1: Can you link me an example PR with the ref spam in it, please? username_0: It's in a private repo, would that still be helpful? https://github.com/playfig/website/pull/6227 username_0: Here's a gist with the text from the PR https://gist.github.com/username_0/7c4f0370b03fa0fe7da2bf72a8b8d44b username_1: 👍, thanks. username_1: Fixed in https://github.com/dependabot/dependabot-core/commit/ceb2fc635451b9bcef9fcb711c670d2cc09c9ce5, which is deploying right now. Very sorry about that - looks like this was an edge case that we missed. I'll add tests for it so it can't regress later, but wanted to get this deployed straight away. Status: Issue closed username_0: Thanks! The speed at which you acknowledged and fixed this issue is truly impressive :)
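Dependabot's actual fix lives in dependabot-core (Ruby), so the snippet below is only a language-agnostic sketch of the sanitising idea: breaking bare `#123` references so GitHub no longer links them to local issues or acts on "Fixes #123" keywords. The helper name is made up for the illustration.

```python
import re


def sanitize_issue_refs(message: str) -> str:
    """Hypothetical helper: keep '#123' readable but stop GitHub auto-linking it.

    Inserting a zero-width space between '#' and the digits is enough to break
    linkification (and therefore 'Fixes #123' auto-closing) in a PR body.
    """
    return re.sub(r"#(\d+)", "#\u200b\\1", message)


assert sanitize_issue_refs("Fixes #1234") == "Fixes #\u200b1234"
```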
RaiseTech-team01/reservation_app
976101046
Title: Test run: 20210821_01 Question: username_0: # Test overview ## Purpose * Check the state of the fixes on the branch to which the fixes were applied ## Test sheet https://docs.google.com/spreadsheets/d/1GBV-LMVO-VfI2blhbQo_AiHpF1ak9RhdqedK3GYW2hA/edit#gid=1052920068 ## Test branch https://github.com/RaiseTech-team01/reservation_app/tree/qa/i182-20210817_01 ## Test environment MacBook Pro (15-inch, Mid 2012), Catalina 10.15.7 (19H1217) Answers: username_0: ## Test results * Total items: 44 * Pass rate (%): 63.6 * Breakdown - o: 28 - x: 6 - others: 10 Status: Issue closed username_0: I registered the detected bugs as issues.
HXLStandard/libhxl-python
317407435
Title: Include raw row number in error reports Question: username_0: If it is available (depending on the type of data source), include the raw row number in error reports. - [ ] add unit tests - [ ] (re)implement in hxl.model.Dataset and hxl.io input classes - [ ] support in validation output (see #132 ) - [ ] merge to test - [ ] update JSON validation spec to add _raw\_row_ - [ ] deploy to beta - [ ] update wiki Originally reported by @alexandru-m-g in HXLStandard/hxl-proxy#202 Status: Issue closed Answers: username_0: Deployed, and all docs updated. username_1: tested!
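As a rough illustration of what the checklist above asks for (carrying a raw row number through to the error report and into a raw_row field in the JSON output), a minimal sketch follows; the class and function names here are invented and are not libhxl's real API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ValidationIssue:
    """Invented error record; `raw_row` mirrors the proposed raw_row field."""
    message: str
    row: int                       # logical (HXL) row number
    raw_row: Optional[int] = None  # physical row in the source, when the input provides one


def to_report(issue: ValidationIssue) -> dict:
    """Build the JSON-style payload, adding raw_row only when it is available."""
    payload = {"message": issue.message, "row": issue.row}
    if issue.raw_row is not None:
        payload["raw_row"] = issue.raw_row
    return payload
```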
KC3Kai/KC3Kai
210952305
Title: Icon, Text misalignment Question: username_0: ### Versions: Google Chrome 56.0.2924.87 (Official Build) (64-bit) Windows 10 Education 64-bit KanColle Command Center 改 31.1.1 ### Steps to reproduce: Updated from previous released version 31.1.0 ### Actual behaviour: Misaligned text and icons on almost every page. No misalignment only on KC3改 settings page and in-game subtitles. Issue only appeared for me in this version. Issue persists even after trying to resize the panel and game windows. Tried to see if the issue persists on other extensions, websites but I did not observe similar problems. ### Expected behaviour: Aligned Text and icons ### Screenshots (if applicable): http://i.imgur.com/dnsyJt3.png http://i.imgur.com/cEOKdBW.png http://i.imgur.com/gv5NMds.png ### Error Report (if applicable): Answers: username_1: this is a known bug, see #1592 the cause is a bad extension installation on the side of chrome, no problem on your side nor on ours. chrome forgets to copy our "bootstrap" style sheet. some have even tried copying the missing file manually but chrome says no and gets corrupted. common solutions: * reinstall but will clear your strategy room * wait for next update as chrome will re-copy the files username_1: closing as duplicate Status: Issue closed
katzer/cordova-plugin-local-notifications
289912282
Title: Installation failed: Error: Unexpected token } in JSON at position 2898 Question: username_0: Hello, the plugin install command is not working: ``` cordova plugin add cordova-plugin-local-notification Fetching plugin "cordova-plugin-local-notification" via npm Error: Unexpected token } in JSON at position 2898 ``` ## My Environment * Plugin version: Latest * Platform: Android * OS version: 7 * Device manufacturer / model: Samsung * Cordova version: 6.5.0 * Cordova platform version: android 6.3.0 * Plugin config * Ionic Version (if using Ionic) ## Expected Behavior Plugin installation completes without any errors. ## Actual Behavior Throws an error. ## Steps to Reproduce Just one command line: cordova plugin add cordova-plugin-local-notification ## Context Install the plugin. ## Debug logs `Error: Unexpected token } in JSON at position 2898` Answers: username_1: Try again, worked for me 10 mins ago. username_2: Please reopen if it's not solved. Looks more like a temporary issue with npm Status: Issue closed
awslabs/aws-config-rdk
337048893
Title: Add support for central Config bucket Question: username_0: Add a flag to allow `init` to ignore a missing Config bucket, for the use case where a central bucket in a separate account is being used to track Configuration history. Answers: username_1: This is important, since init seems to fail when delivery_channels['DeliveryChannels'][0]['s3BucketName'] returns the name of a central bucket from another account. The init process isn't aware that the bucket already exists (maybe it is enough to set "config_bucket_exists = True" at around line 118/119). In any case, the call "response = my_s3.list_buckets()" will not return the name of the bucket (because the bucket is in another account), and the create_bucket call that follows will then fail because the bucket name isn't available. username_2: This issue should be closed as there is now a flag for this
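username_1's point is that list_buckets() only returns buckets owned by the calling account, so a Config bucket in a central account never appears and the subsequent create_bucket call fails. A common way to check for a bucket that may live in another account is head_bucket, sketched below; this is an illustration of the idea, not the RDK's actual init code.

```python
import boto3
from botocore.exceptions import ClientError


def config_bucket_exists(bucket_name: str) -> bool:
    """Illustration only: True if the bucket is reachable, even cross-account."""
    s3 = boto3.client("s3")
    try:
        s3.head_bucket(Bucket=bucket_name)  # works for buckets in other accounts too
        return True
    except ClientError as error:
        if error.response["Error"]["Code"] in ("404", "NoSuchBucket"):
            return False
        raise  # e.g. 403: the bucket may exist but we cannot confirm access
```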
Azure/azure-sdk-tools
1186698811
Title: [Python] Support overload in APIView Question: username_0: Examples: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-language-questionanswering/azure/ai/language/questionanswering/aio/_operations/_patch.py https://github.com/Azure/azure-sdk-for-python/blob/a1d223a38bd4dcc1c36bf57a11185022acee7fa5/sdk/schemaregistry/azure-schemaregistry-avroencoder/azure/schemaregistry/encoder/avroencoder/_schema_registry_avro_encoder.py https://github.com/Azure/azure-sdk-for-python/blob/e3e8135c83d0ca4cde1aae683b7304cfde39b276/sdk/containerregistry/azure-containerregistry/azure/containerregistry/aio/_async_container_registry_client.py#L573-L587
ftylitak/qzxing
279415532
Title: memory leak in DataMatrixDecodedBitStreamParser.cpp Question: username_0: The last line (brackets aside that is) of that file reads `delete bytes;` while it should be `delete [] bytes;` as the bytes variable was allocated (a few lines higher up) with the new [] operator: `char* bytes = new char[count];` Status: Issue closed Answers: username_1: It took me a while, though i have finally made the change. Thank you for your comment.
syuchan1005/OekakinoMorihiko
188449002
Title: Sending the canvas to newly connected users Question: username_0: Would it be possible by serializing the canvas object in JavaScript in some way and sending it? http://stackoverflow.com/questions/22228552/serialize-canvas-content-to-arraybuffer-and-deserialize-again Answers: username_1: It is currently commented out in the [implementation](https://github.com/username_1/OekakinoMorihiko/blob/master/src/main/java/com/github/username_1/oekakinomorihiko/WebSocketHandler.java#L40). This assumes a JavaScript image/png string, i.e. binary transmission, so considering compression would be one approach. Compression is possible, but some browsers (other than Chromium) may fail to decompress it, so this is waiting on browser support. Status: Issue closed
Holzhaus/mixxx-gh-issue-migration
873126794
Title: Mixxx Crates are Broken Question: username_0: After the update to trunk and 1.10 on 9-25-2011, the crates no longer work correctly. Selecting a crate displays some tracks; however, selecting a different crate does not change the selected tracks. It appears that the first selected crate is displayed and subsequent crates are not loaded. Tested and confirmed with 1.10 and trunk.<issue_closed> Status: Issue closed
jackc/pgx
640909122
Title: When querying an empty table, 'rows.Columns' has no value Question: username_0: When querying an empty table, 'rows.Columns' has no value,but `github.com/lib/pq` returns the correct result ```go package main import ( "database/sql" "github.com/davecgh/go-spew/spew" _ "github.com/jackc/pgx/v4/stdlib" _ "github.com/lib/pq" ) const dburl = "postgres://xxx:xxx@localhost:5432/postgres?sslmode=disable" func main() { func() { db, err := sql.Open("pgx", dburl) if err != nil { panic(err) } rows, err := db.Query("select 1 where 1=2") if err != nil { panic(err) } defer rows.Close() cols, err := rows.Columns() if err != nil { panic(err) } println("pgx columns:") spew.Dump(cols) }() func() { db, err := sql.Open("postgres", dburl) if err != nil { panic(err) } rows, err := db.Query("select 1 where 1=2") if err != nil { panic(err) } defer rows.Close() cols, err := rows.Columns() if err != nil { panic(err) } println("pq columns:") spew.Dump(cols) }() } ``` output: ``` pgx columns: ([]string) { } pq columns: ([]string) (len=1 cap=1) { (string) (len=8) "?column?" } ``` Answers: username_1: I think without a value it could be the best result. If there is a value, how could you check it? username_0: I want to get the field definition of the query, at least a list of field names. username_1: Try to update the required version of [pgconn](https://github.com/jackc/pgconn) if your current version is not v1.6.0. username_1: You can see [CHANGELOG.md](https://github.com/jackc/pgx/blob/master/CHANGELOG.md). There are some unreleased changes. username_0: Yes, that's fine. Thank you very much. Look forward to the release soon.
apache/trafficserver
1077219381
Title: trafficserver 9.1.1 OSX build failure Question: username_0: 👋 trying to upgrade trafficserver to the latest version, but failed with the following build errors. ``` Undefined symbols for architecture arm64: "thread-local wrapper routine for pluginThreadContext", referenced from: _TSRemapInit in plugin_v1_la-plugin_misc_cb.o _TSRemapNewInstance in plugin_v1_la-plugin_misc_cb.o ld: symbol(s) not found for architecture arm64 ``` ``` libtool: link: clang++ -o unit-tests/.libs/plugin_v1.so -bundle unit-tests/.libs/plugin_v1_la-plugin_misc_cb.o -L/usr/local/opt/[email protected]/lib -L/usr/local/Cellar/pcre/8.45/lib -g -O3 -Wl,-exported_symbols_list,unit-tests/.libs/plugin_v1-symbols.expsym Undefined symbols for architecture x86_64: "thread-local wrapper routine for pluginThreadContext", referenced from: _TSRemapInit in plugin_v2_la-plugin_misc_cb.o _TSRemapNewInstance in plugin_v2_la-plugin_misc_cb.o ld: symbol(s) not found for architecture x86_64 libtool: link: /usr/bin/nm -B unit-tests/.libs/plugin_init_fail_la-plugin_init_fail.o | sed -n -e 's/^.*[ ]\([BCDEGRST][BCDEGRST]*\)[ ][ ]*_\([_A-Za-z][_A-Za-z0-9]*\)$/\1 _\2 \2/p' | sed '/ __gnu_lto/d' | /usr/bin/sed 's/.* //' | sort | uniq > unit-tests/.libs/plugin_init_fail.exp clang: error: linker command failed with exit code 1 (use -v to see invocation) make[3]: *** [unit-tests/plugin_v2.la] Error 1 make[3]: *** Waiting for unfinished jobs.... Undefined symbols for architecture x86_64: "thread-local wrapper routine for pluginThreadContext", referenced from: _TSRemapInit in plugin_v1_la-plugin_misc_cb.o _TSRemapNewInstance in plugin_v1_la-plugin_misc_cb.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) ``` - relates to Homebrew/homebrew-core#90883<issue_closed> Status: Issue closed
getsentry/raven-php
99180029
Title: Application environment Question: username_0: What's the correct way to add application environment context? I'm currently doing this: ```php $client = new Raven_Client(Config::get('services.raven.dsn')); $context = [ 'extra' => [ 'environment' => App::environment(), 'host' => Request::server('HTTP_HOST'), 'server' => Request::server('SERVER_NAME'), ] ]; $client->captureException($e, $context); ``` Answers: username_0: But when I view exceptions being reported, there is no associated environment: ``` ENVIRONMENT No data available. ``` username_1: @username_0 The environment listed under request is related to the HTTP context (it's the env key): https://docs.getsentry.com/hosted/clientdev/interfaces/#context-interfaces Status: Issue closed
borgbackup/borg
241728136
Title: LockTimeout problem? Question: username_0: So, I am running this command: `borg list ssh://[email protected]/media/backup/matej` And got a bunch of errors: ``` Remote: Borg 1.0.7: exception in RPC call: Remote: Traceback (most recent call last): Remote: File "borg/locking.py", line 136, in acquire Remote: FileExistsError: [Errno 17] File exists: '/media/backup/matej/lock.exclusive' Remote: Remote: During handling of the above exception, another exception occurred: Remote: Remote: Traceback (most recent call last): Remote: File "borg/remote.py", line 98, in serve Remote: File "borg/remote.py", line 136, in open Remote: File "borg/repository.py", line 82, in __enter__ Remote: File "borg/repository.py", line 171, in open Remote: File "borg/locking.py", line 268, in acquire Remote: File "borg/locking.py", line 120, in __enter__ Remote: File "borg/locking.py", line 141, in acquire Remote: borg.locking.LockTimeout: /media/backup/matej/lock.exclusive Remote: Platform: Linux backup 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l Remote: Linux: debian 8.0 Remote: Borg: 1.0.7 Python: CPython 3.5.2 Remote: PID: 26403 CWD: /home/pi Remote: sys.argv: ['borg', 'serve', '--umask=077'] Remote: SSH_ORIGINAL_COMMAND: None Remote: ('Remote Exception (see remote log for the traceback)', 'LockTimeout') Platform: Linux cryptoloop 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 Linux: Ubuntu 16.04 xenial Borg: 1.0.7 Python: CPython 3.5.2 PID: 4324 CWD: /home/matej sys.argv: ['/usr/bin/borg', 'list', 'ssh://[email protected]//media/backup/matej'] SSH_ORIGINAL_COMMAND: None ``` And yes, I am using version 1.0.7 on both sides. The problem might be, I am running first initial backup at the same time. It seems there is a problem with lock. Is it true? Anyway, by my opinion error message should be much more straightforward. Status: Issue closed Answers: username_1: Fixed in [1.1 series](https://www.borgbackup.org/releases/) -- Duplicate of #768 et al
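The traceback above is the expected outcome when another borg process (here, the initial backup) already holds the repository lock: the lock is taken by atomically creating lock.exclusive, and a second client gives up after the lock timeout. For illustration only, the create-exclusive-path locking pattern visible in that traceback looks roughly like this (this is not borg's actual locking.py; path and timeout names are made up):

```python
import errno
import os
import time


def acquire_lock(path: str, timeout: float = 1.0, poll: float = 0.1) -> None:
    """Sketch of exclusive locking via an atomically created directory."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.mkdir(path)   # atomic; fails while another process holds the lock
            return
        except OSError as err:
            if err.errno != errno.EEXIST:
                raise
            if time.monotonic() >= deadline:
                raise TimeoutError(f"could not acquire lock: {path}")
            time.sleep(poll)  # e.g. the running backup still holds it


def release_lock(path: str) -> None:
    os.rmdir(path)
```

If your borg version supports it, a longer --lock-wait value lets the second command wait for the running backup instead of failing immediately.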
glasklart/hd
856636140
Title: Everlance: Mileage & Expense Question: username_0: **App Name:** Everlance: Mileage & Expense **Bundle ID:** com.everlance.everlance **iTunes ID:** <a target="_blank" href="http://getart.username_1.at?id=985378916">985378916</a> **iTunes URL:** <a target="_blank" href="https://apps.apple.com/us/app/everlance-mileage-expense/id985378916?uo=4">https://apps.apple.com/us/app/everlance-mileage-expense/id985378916?uo=4</a> **App Version:** 3.3.0 **Seller:** Everlance Inc. **Developer:** <a target="_blank" href="https://apps.apple.com/us/developer/everlance-inc/id966075356?uo=4">© Everlance Inc.</a> **Original Artwork:** <img src="https://is2-ssl.mzstatic.com/image/thumb/Purple114/v4/71/bb/81/71bb817b-5bea-5f9e-023a-ac3d8b6805d9/source/1024x1024bb.png" width="180" height="180" /> **Accepted Artwork:** \#\#\# THIS IS FOR GLASKLART MAINTAINERS DO NOT MODIFY THIS LINE OR WRITE BELOW IT. CONTRIBUTIONS AND COMMENTS SHOULD BE IN A SEPARATE COMMENT. \#\#\# Answers: username_0: ![ico](https://preview.username_1.at?image=https://user-images.githubusercontent.com/2313873/114509230-47e5a780-9c03-11eb-8256-21523d892a60.png) https://user-images.githubusercontent.com/2313873/114509230-47e5a780-9c03-11eb-8256-21523d892a60.png ___ Source: https://user-images.githubusercontent.com/2313873/114509211-42885d00-9c03-11eb-9fec-9f87e609e204.png Status: Issue closed
dart-lang/sdk
478381361
Title: Dart analysis crashes for unknown reason in Android Studio and VS Code Question: username_0: Analyzer Feedback from IntelliJ ## Version information - `IDEA AI-183.6156.11.34.5692245` - `2.4.0` - `AI-183.6156.11.34.5692245, JRE 1.8.0_152-release-1343-b01x64 JetBrains s.r.o, OS Windows 10(amd64) v10.0 , screens 3440x1440` ## Exception ``` Dart analysis server, SDK version 2.4.0, server version 1.27.1, error: Captured exception Unsupported operation: unsupported initializer: IndexExpressionImpl :: super()[selectedGroup] #0 AstBuilder.endInitializers (package:analyzer/src/fasta/ast_builder.dart:1333:9) #1 Parser.parseInitializers (package:front_end/src/fasta/parser/parser.dart:2578:14) #2 Parser.parseInitializersOpt (package:front_end/src/fasta/parser/parser.dart:2515:14) #3 Parser.parseMethod (package:front_end/src/fasta/parser/parser.dart:3241:13) #4 Parser.parseClassOrMixinMemberImpl (package:front_end/src/fasta/parser/parser.dart:3122:15) #5 Parser.parseClassOrMixinBody (package:front_end/src/fasta/parser/parser.dart:2898:15) #6 Parser.parseClass (package:front_end/src/fasta/parser/parser.dart:1783:13) #7 Parser.parseClassOrNamedMixinApplication (package:front_end/src/fasta/parser/parser.dart:1743:14) #8 Parser.parseTopLevelKeywordDeclaration (package:front_end/src/fasta/parser/parser.dart:570:14) #9 Parser.parseTopLevelDeclarationImpl (package:front_end/src/fasta/parser/parser.dart:466:14) #10 Parser.parseUnit (package:front_end/src/fasta/parser/parser.dart:348:15) #11 ParserAdapter.parseCompilationUnit2 (package:analyzer/src/generated/parser_fasta.dart:202:32) #12 ParserAdapter.parseCompilationUnit (package:analyzer/src/generated/parser_fasta.dart:197:12) #13 _File._parse (package:analyzer/src/services/available_declarations.dart:1656:23) #14 _File.refresh (package:analyzer/src/services/available_declarations.dart:1074:30) #15 DeclarationsTracker._getFileByPath (package:analyzer/src/services/available_declarations.dart:601:14) #16 DeclarationsTracker.doWork (package:analyzer/src/services/available_declarations.dart:514:18) #17 CompletionLibrariesWorker.performWork (package:analysis_server/src/domains/completion/available_suggestions.dart:277:13) <asynchronous suspension> #18 AnalysisDriverScheduler._run (package:analyzer/src/dart/analysis/driver.dart:2112:35) <asynchronous suspension> #19 AnalysisDriverScheduler.start (package:analyzer/src/dart/analysis/driver.dart:2062:5) #20 new AnalysisServer (package:analysis_server/src/analysis_server.dart:208:29) #21 SocketServer.createAnalysisServer (package:analysis_server/src/socket_server.dart:86:26) #22 StdioAnalysisServer.serveStdio (package:analysis_server/src/server/stdio_server.dart:37:18) #23 Driver.startAnalysisServer.<anonymous closure> (package:analysis_server/src/server/driver.dart:542:21) #24 _rootRun (dart:async/zone.dart:1124:13) #25 _CustomZone.run (dart:async/zone.dart:1021:19) #26 _runZoned (dart:async/zone.dart:1516:10) #27 runZoned (dart:async/zone.dart:1463:12) #28 Driver._captureExceptions (package:analysis_server/src/server/driver.dart:627:12) #29 Driver.startAnalysisServer (package:analysis_server/src/server/driver.dart:540:7) #30 Driver.start.<anonymous closure> (package:analysis_server/src/server/driver.dart:444:9) #31 _AsyncAwaitCompleter.start (dart:async-patch/async_patch.dart:49:6) #32 Driver.start.<anonymous closure> (package:analysis_server/src/server/driver.dart:439:43) #33 CompilerContext.runInContext.<anonymous closure>.<anonymous closure> (package:front_end/src/fasta/compiler_context.dart:122:46) #34 new 
Future.sync (dart:async/future.dart:224:31) #35 CompilerContext.runInContext.<anonymous closure> (package:front_end/src/fasta/compiler_context.dart:122:19) #36 _rootRun (dart:async/zone.dart:1124:13) #37 _CustomZone.run (dart:async/zone.dart:1021:19) #38 _runZoned (dart:async/zone.dart:1516:10) #39 runZoned (dart:async/zone.dart:1463:12) #40 CompilerContext.runInContext (package:front_end/src/fasta/compiler_context.dart:121:12) #41 CompilerContext.runWithDefaultOptions (package:front_end/src/fasta/compiler_context.dart:140:56) #42 Driver.start (package:analysis_server/src/server/driver.dart:439:21) #43 main (file:///C:/b/s/w/ir/k/src/third_party/dart/pkg/analysis_server/bin/server.dart:12:11) #44 _AsyncAwaitCompleter.start (dart:async-patch/async_patch.dart:49:6) #45 main (file:///C:/b/s/w/ir/k/src/third_party/dart/pkg/analysis_server/bin/server.dart:10:10) #46 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:299:32) #47 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12) ``` For additional log information, please append the contents of file://C:\Users\lewci\AppData\Local\Temp\report.txt. Status: Issue closed Answers: username_1: Should be fixed by https://dart-review.googlesource.com/c/sdk/+/113703
gshirato/SportsLabo
412927169
Title: Predicting soccer highlights from spatio-temporal match event streams Question: username_0: # What kind of paper is this ## Paper URL https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14743 ## Authors <NAME>, <NAME>, <NAME>, <NAME> KU Leuven ## Publication date ## Type / venue AAAI # Content ## Summary Abstract ## Novelty ## Methods / procedure Methods ## Results Results ## Discussion Discussion # Comments Comment
mobxjs/serializr
267254926
Title: problem when using generics Question: username_0: Hello i am trying to use generics ``` import { createModelSchema, primitive, reference, list, object, identifier, serialize, deserialize, getDefaultModelSchema, serializable, alias } from 'serializr'; export class GenericListResponse<T> { @serializable(alias('data', list(object(T))) list: T[] = []; } ``` But i am getting compilcation error here ``` [ts] 'T' only refers to a type, but is being used as a value here. any ``` And idea how i can achieve this. Answers: username_1: I apologize, I don't know Typescript that well, but from what I can infer here is `data` a self referential array of class T? username_2: @username_0 In my opinion, the generic `T` is just a `interface` that describes the shape of value. ```typescript list: T[] = []; ``` The above is okay because of this indicates what the shape of a item in `list` likes. But `object()` in serializr is a function that accepts a `ModelSchema` object or a `class` which can be found `ModelSchema` object inside it. Both types of value that `object()` needs are real values not a `interface` indicates the shape of a value. I don't know ts that well too ... But I think I understand this correctly. username_1: I think I understand. The issue with this example is that you need to instead pass a Class to the `object` function. You can do this like: ``` class Response { @serializable id; } export class GenericListResponse<T> { @serializable(alias('data', list(object(Response))) list: T[] = []; } ``` That list would then hold an array of `Response` class instances. Does that make sense? Status: Issue closed username_1: Hello i am trying to use generics ``` import { createModelSchema, primitive, reference, list, object, identifier, serialize, deserialize, getDefaultModelSchema, serializable, alias } from 'serializr'; export class GenericListResponse<T> { @serializable(alias('data', list(object(T))) list: T[] = []; } ``` But i am getting compilcation error here ``` [ts] 'T' only refers to a type, but is being used as a value here. any ``` And idea how i can achieve this. username_1: Sorry, didn't mean to close this yet! username_0: what i was trying to do is replace T which different of classes when using making api call like against data key. ForExample ``` deserialize(GenericListResponse<User>,listData) deserialize(GenericListResponse<Pets>,listData) deserialize(GenericListResponse<Cars>,listData) ``` The way you have mentioned will hardcode it with Response and whole point using generic will be lost. username_1: So if you have a polymorphism issue then the custom deserializer is your best bet. ``` class Generic { @serializable id; } class Cars extends Generic { @serializable(list()) cars; end export class GenericListResponse<T> { @serializable(alias('data', custom( dataToSerialize => filterModels.map(filterModel => serialize(dataToSerialize)), dataToDeserialize => dataToDeserialize.cars ? deserialize(Cars, dataToDeserialize) : deserialize(Generic, dataToDeserialize) ))) list: T[] = []; } ``` that custom function will deserialize with the class `Cars` when the dataToDeserialize has an attribute `cars` that is truthy. Is this more what you were referring to? username_1: Closed for inactivity Status: Issue closed
InnovativeOnlineIndustries/Emojiful
797771890
Title: Everything to the right of the cursor in chat is shifted to the left by one pixel (unless the cursor is at position 0) Question: username_0: ![temp](https://user-images.githubusercontent.com/14056899/106393262-1f039380-63c4-11eb-89e2-abb1de515786.gif) Emojiful Version: `emojiful-1.16.4-2.1.4.jar` Forge Version: `1.16.5-forge-36.0.13` No other mods are installed.
NASA-AMMOS/AIT-GUI
483685726
Title: DataArchive plugin causes GUI plugin to crash Question: username_0: Using today's (8/21/19) master branch of AIT-Core and PR #127 of AIT-GUI, running `ait-server` with the config below results in the GUI crashing with the log messages below. Removing the DataArchive plugin from the server config allows the GUI to run only the warning: ``` [GUI Playback Configuration]Unable to locate DataArchive plugin configuration for historical data queries. Historical telemetry playback will be disabled in monitoring UI and server endpoints. ``` ``` server: plugins: - plugin: name: ait.gui.AITGUIPlugin inputs: - telem_stream - log_stream outputs: - command_stream - plugin: name: ait.core.server.plugin.DataArchive datastore: ait.core.db.InfluxDBBackend inputs: - telem_stream ``` ``` 019-08-21T14:54:26.201 | WARNING | No handlers specified for stream log_stream 2019-08-21T14:54:26.202 | INFO | Added inbound stream <PortInputStream address=:2514> 2019-08-21T14:54:26.205 | INFO | Created handler PacketHandler for stream telem_stream 2019-08-21T14:54:26.206 | INFO | Added inbound stream <PortInputStream address=:3076> 2019-08-21T14:54:26.207 | INFO | Created handler PacketHandler for stream telem_stream2 2019-08-21T14:54:26.208 | INFO | Added inbound stream <PortInputStream address=:3078> 2019-08-21T14:54:26.209 | WARNING | No handlers specified for stream command_stream 2019-08-21T14:54:26.210 | INFO | Added outbound stream <PortOutputStream name=command_stream> 2019-08-21T14:54:26.265 | ERROR | <class 'requests.exceptions.ConnectionError'> creating plugin 0: HTTPConnectionPool(host='localhost', port=8086): Max retries exceeded with url: /query?q=SHOW+DATABASES (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10d69d950>: Failed to establish a new connection: [Errno 61] Connection refused',)) 2019-08-21T14:54:26.266 | WARNING | No plugin outputs specified for ait.core.server.plugin.DataArchive 2019-08-21T14:54:26.276 | ERROR | Unable to connect to ait.core.db.InfluxDBBackend backend. Disabling data archive. 2019-08-21T14:54:26.277 | ERROR | <class 'requests.exceptions.ConnectionError'> creating plugin 1: HTTPConnectionPool(host='localhost', port=8086): Max retries exceeded with url: /query?q=SHOW+DATABASES (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10d6bf310>: Failed to establish a new connection: [Errno 61] Connection refused',)) 2019-08-21T14:54:26.278 | WARNING | No valid plugin configurations found. No plugins will be added. 2019-08-21T14:54:26.278 | INFO | Starting <Broker at 0x10c9bc230> greenlet... 2019-08-21T14:54:26.279 | INFO | Starting <PortOutputStream name=command_stream> greenlet... 2019-08-21T14:54:26.280 | INFO | Starting <PortInputStream address=:2514> greenlet... 2019-08-21T14:54:26.280 | INFO | Starting <PortInputStream address=:3076> greenlet... 2019-08-21T14:54:26.281 | INFO | Starting <PortInputStream address=:3078> greenlet... 2019-08-21T14:54:26.282 | INFO | Starting broker... ``` Answers: username_1: Resolved in #127 and #130 Status: Issue closed
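The log shows the underlying problem: the ConnectionError raised while constructing the DataArchive plugin aborts the whole plugin-creation loop ("No valid plugin configurations found"), so the unrelated GUI plugin is dropped as well. Purely as an illustration of per-plugin isolation (this is not AIT's actual server code; the function and factory names are invented):

```python
import logging

log = logging.getLogger(__name__)


def create_plugins(plugin_configs, plugin_factory):
    """Instantiate each configured plugin, skipping failures instead of aborting."""
    plugins = []
    for index, config in enumerate(plugin_configs):
        try:
            plugins.append(plugin_factory(config))
        except Exception as error:  # e.g. a requests ConnectionError from InfluxDB
            log.error("Skipping plugin %d (%s): %s",
                      index, config.get("name", "<unnamed>"), error)
    return plugins
```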
getsentry/sentry-native
594929008
Title: CMake imported target leaks C++14 requirement Question: username_0: The generated sentry-targets.cmake specifies 'INTERFACE_COMPILE_FEATURES "cxx_std_14"', which is causing me some difficulties. It's no problem for me to build the sentry libs using a C++14 compiler, but I don't want to switch my entire application to use it (currently on C++11) Is it intentional that C++14 requirement (from Crashpad) is exposed as an interface feature? Given that sentry.h exposes a C API, this seems strange. Answers: username_0: As an experiment, I tried removing the INTERFACE_COMPILE_FEATURES from the cmake file, and at least on macOS (Xcode 11.4, Apple clang version 11.0.3 (clang-1103.0.32.29)) everything builds and links fine. I'm about to start testing the Windows integration username_1: I think this is already fixed in master, can you try with that? CC @username_2 username_0: Thanks, I was using the latest stable release, will try master and report back. username_2: I think it's still present. I removed it in sentry-native in https://github.com/getsentry/sentry-native/commit/7b18bef0b7cdc258c4baf17a4903fa0a44b905c3#diff-af3b638bc2a3e6c650974192a53c7291 And added it to crashpad: https://github.com/getsentry/crashpad/blob/23474e346aa7c24a27506ee038b22bae74a5ce75/client/CMakeLists.txt#L60 It can be fixed by surrounding cxx_std_14 with the BUILD_INTERFACE generator expression username_0: Target "myFooTarget" links to target "sentry_crashpad::client" but the target was not found. Perhaps a find_package() call is missing for an IMPORTED target, or an ALIAS target is missing? Of course there is no such thing as 'sentry_crashpad::client' defined in any of the exported Cmake files. username_0: add_library(sentry::sentry SHARED IMPORTED) set_target_properties(sentry::sentry PROPERTIES INTERFACE_COMPILE_DEFINITIONS "SENTRY_WITH_UNWINDER_LIBBACKTRACE" INTERFACE_INCLUDE_DIRECTORIES "${_IMPORT_PREFIX}/include" INTERFACE_LINK_LIBRARIES "\$<\$<OR:\$<PLATFORM_ID:Linux>,\$<PLATFORM_ID:Android>>:-Wl,--build-id=sha1>;sentry_crashpad::client" ) This seems weird since I'm on a macOS build using the 'internal' crashpad included as a submodule. username_2: @username_0 The `sentry_crashpad::client` target is created by`sentry_crashpad-targets.cmake` (when you're building static libraries). Have you tried updating the git submodules? Because these `sentry_native` targets are a recent addition. Run `git fetch origin` and `git submodule update -r -f` inside the `sentry-native` directory to force updating them recursively. If that doesn't work, can you post the files in the cmake folder of your installed `sentry-native` to a gist (as not to pollute this issue with a lot of output). I don't own a mac so I can't test a similar setup. @username_1 It looks like the defines `SENTRY_WITH_UNWINDER_LIBBACKTRACE` are exported, I guess these defines can be made private? username_1: Yes I think so. username_0: @username_2 I think the confusion occurring is static-vs-shared libs. My CMake build of sentry was configure with no options aside from CMAKE_INSTALL_PREFIX. It's defaulted to building: - Crashpad library static (as libcrashpad_client.a) - libSentry as shared aka dynamic (libsentry.dylib) And at cmake install time, libSentry.dylib is installed to $prefix/lib/ of course - but Crashpad libraries are not (because they're static). 
Given this (which seems correct to me), these lines seem weird to me: `target_link_libraries(sentry PUBLIC $<BUILD_INTERFACE:crashpad::client> $<INSTALL_INTERFACE:sentry_crashpad::client> )` If libSentry *and* libCrashpad were static, it would make sense (libCrashpad is needed transitively by applications linking to libSentry). But I think for the case of a dynamic libSentry, 'PUBLIC' is the wrong specification here: it's a PRIVATE dependency. (This would also hold if libCrashpad were built dynamic, since the dynamic linker would deal with the transitive dependency, and libSentry users do not need to link directly to libCrashpad symbols) Of course, I'm still figuring this out, could be totally mistaken about how it's intended to work. username_0: As a test, I changed PUBLIC -> PRIVATE in the line above (root CMakeList.txt, line 287), and the generated sentry-targets.cmake now looks correct & works for me. username_1: That definitely makes sense, since crashpad is linked statically to sentry, so it does not need to be exported. username_0: I was unsure if there was a static configuration option (common in many CMake projects) where libSentry itself was built static. In that situation, the current version could make sense. username_2: @username_0 You're right about `CMakelists.txt:287`. That link connection should indeed be made `PRIVATE`. For a static `sentry` library, nothing will change. Users will have to link to sentry and all of its dependent static libraries. But currently, a consumer of a shared `sentry` library will also link to the crashpad client, which is unnecessary. Status: Issue closed username_0: @username_2 yes this sounds good to me, and my original issue is solved as well - thanks.
NKUST-ITC/NKUST-AP-API
497470275
Title: Leave-request feature: data format for submitting the leave form Question: username_0: The scraper part of the leave-request feature is now complete, but some of the data formats for the API integration need to be discussed. ## Current state The data currently POSTed to `/leaves/submit` is `application/json` ```json { "days": [ { "day": "2019/03/03", "class": [ "A", "3", "5" ] }, { "day": "2019/03/05", "class": [ "2", "3" ] } ], "leaveType": "21", "teacherId": "10041", "reasonText": "我不當人類拉 JOJO!!!!!" } ``` There is no problem with these fields as such. ## Problem This time the leave-request feature has to include a "proof of leave", so there must be a way to transfer an image. A few options for discussion: --- ### 1. Switch to `multipart/form-data` Not very different from basic form-data. The POSTed data would be ``` leavesData: "{"days":"2019/03/03".....}" leavesProof: image file ``` Splitting the JSON inside `leavesData` into individual form-data fields would make the backend data handling cumbersome, so my suggestion is to POST the data as shown above. (Falcon does not support multipart at this stage; it can be extended through the built-in cgi module or an external library.) (Falcon upstream lists multipart as a planned feature.) --- ### 2-1. Keep JSON, but use a third-party image server (The frontend could integrate a third-party image upload service.) Add a `proof_url` field to the existing POST data; the backend then downloads the image from the URL in that field and uploads it to the school. --- ### 2-2. Keep JSON, but upload the image through the API Open a separate endpoint that provides image uploads, for example `/leaves/submit/proof/upload`, which returns something like ``` { "image_hash":"fovobgpsd33dfefp" } ``` and add this field to the existing API POST data. --- ### 2-3. Keep JSON, but embed the image as base64 in the original JSON Convert the image to base64 and include it in the original POST JSON. Answers: username_0: @username_1 over to you; let me know if any part is unclear. username_1: I think option 1 is the most suitable. Let's keep the API simple; it is enough for the documentation to explain it clearly. If there is no problem, update the documentation and then implement it. username_1: #46 Status: Issue closed
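Since option 1 (multipart/form-data) was chosen, and the thread notes that Falcon did not support multipart natively at the time and suggests the built-in cgi module as a workaround, here is a rough sketch of that workaround. The endpoint and the leavesData/leavesProof field names follow the discussion; everything else (class name, response shape) is assumed.

```python
import cgi
import json


class LeaveSubmitResource:
    """Sketch: read `leavesData` (JSON string) and `leavesProof` (image bytes)."""

    def on_post(self, req, resp):
        # Parse the multipart body with the standard-library cgi module,
        # since this Falcon version has no built-in multipart support.
        form = cgi.FieldStorage(
            fp=req.bounded_stream,
            environ={
                "REQUEST_METHOD": "POST",
                "CONTENT_TYPE": req.content_type,
                "CONTENT_LENGTH": str(req.content_length or 0),
            },
        )
        leaves_data = json.loads(form.getvalue("leavesData"))
        proof_bytes = form.getvalue("leavesProof")  # raw bytes of the uploaded file
        # ... hand leaves_data and proof_bytes to the school system here ...
        resp.media = {"ok": True, "days": len(leaves_data.get("days", []))}


# app = falcon.API()   # falcon.App() on newer Falcon versions
# app.add_route("/leaves/submit", LeaveSubmitResource())
```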
CocoaPods/cocoapods-downloader
251626181
Title: CocoaPods hangs on unpacking boost 1.59 when executing pod install Question: username_0: See discussion [here](https://github.com/CocoaPods/CocoaPods/issues/4830) Answers: username_0: Tried running the same commands: `curl -f -L -o file.tgz http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz` and `tar xfz file.tgz -C out` Seems to work as expected. I also added a simple test to the http_spec.rb to extract the boost tar file but it took it 7-8s. username_1: < HTTP/1.1 200 OK < Server: nginx < Date: Mon, 25 Sep 2017 20:11:55 GMT < Content-Type: text/html; charset=utf-8 < Content-Length: 114760 < Connection: close < Pragma: no-cache < Cache-Control: no-cache ... { [99 bytes data] 100 112k 100 112k 0 0 40706 0 0:00:02 0:00:02 --:--:-- 202k * Closing connection 2 ``` Seems that the returned response is the html page from sourceforge and not the requested binary. username_0: hi @username_1 - I see the same when running `curl` but my `pod install` runs as expected... username_1: @username_0 Can you run pod install in verbose mode? It should do the same as the curl cmd. username_0: $ /usr/bin/curl -f -L -o /<KEY>d20170930-94356-hcyz5b/file.tgz http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz --create-dirs --netrc-optional % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 178 100 178 0 0 539 0 --:--:-- --:--:-- --:--:-- 539 0 355 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 431 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 345 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 100 79.8M 100 79.8M 0 0 674k 0 0:02:01 0:02:01 --:--:-- 602k $ /usr/bin/tar xfz /var/<KEY>T/d20170930-94356-hcyz5b/file.tgz -C /var/<KEY>T/d20170930-94356-hcyz5b ``` username_0: Some network latency but the thing that really takes a long time is the tar command... username_1: Strange. Latency and size were not the issue on my side, but the redirects from sourceforge.net. Anyway, as a workaround I use a podspec that points to a different URL for downloading it. username_0: yep, we will do the same probably... username_2: Just came across this same issue when installing React Native dependencies. I can run the curl command no problem but CocoaPods just hangs on `/usr/bin/tar xfz /<KEY>171001-8272-uapq6q/file.tgz -C /<KEY>0171001-8272-uapq6q`. When I check the folder it looks like everything unpacked fine but CocoaPods is unsure what to do next? username_3: @username_1 @username_0 What URL did you end up using? username_1: @username_3 https://github.com/react-native-community/boost-for-react-native/releases/download/v1.63.0-0/boost_1_63_0.tar.gz from https://github.com/react-native-community/boost-for-react-native username_0: @username_3 - I kept it as it is (same one as in the first comments) username_4: I'm new to cocoa pods and I'm having this issue while using a react native app. @username_1 how do you go about using this custom url in a pod spec? I've been trying to digest documentation and figure it out, but I'm not having much luck. username_5: We are experienced the same problem every now and then on different machines. Downloading boost does not seem to be a problem, but after that Ruby uses up all CPU for about 10 minutes, then the installation succeeds. username_6: ## Quick Fix Find this file on your system and edit it's source url. 
https://github.com/CocoaPods/Specs/blob/d0ec5a65e80656c8d78e12ff19f251df879e0bc2/Specs/9/9/d/boost/1.59.0/boost.podspec.json ### Find the boost spec file locally ```bash open ~/.cocoapods/repos/master/Specs/9/9/d/boost/1.59.0/boost.podspec.json ``` ### Change the source download url to a direct link ```javascript "source": { "http": "https://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz?r=&ts=1513105136&use_mirror=kent" }, ``` ### Pod install with patience ```bash # everything should work - boost can be downloaded and unpacks # Note this takes a while to compile, be patient. pod install ``` username_7: How to achieve this on travis ?? username_8: To build upon @username_6's answer, what solved this for me was to go to the SourceForge page [here](https://downloads.sourceforge.net/project/boost/boost/1.59.0) and get the link it provides for you when you click the `boost_1_59_0.tar.gz` file (or simply [click here](https://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz)). You can get it by right clicking the "direct link" link they have and press "copy link". Then paste it into the source like @username_6 showed above. and run `pod install`. Be prepared, it takes forever. username_9: $ /usr/bin/curl -f -L -o /<KEY>T/d20180102-25086-1otwbdt/file.tgz http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz --create-dirs --netrc-optional % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 178 100 178 0 0 576 0 --:--:-- --:--:-- --:--:-- 577 0 355 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 431 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 345 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 100 79.8M 100 79.8M 0 0 1192k 0 0:01:08 0:01:08 --:--:-- 3639k $ /usr/bin/tar xfz /var/<KEY>d20180102-25086-1otwbdt/file.tgz -C /var/<KEY>T/d20180102-25086-1otwbdt ``` Sampler results: https://gist.github.com/username_9/613107727d86ae12384865113ab97d93 username_10: Any news on this? I've found out that the root cause is indeed an invalid tar.gz file for boost that only contains plain html code. Pls find more info here: https://github.com/CocoaPods/CocoaPods/issues/4830#issuecomment-355219321 username_11: I did a test and disabling cocoapods cache solves that issue. digging around I've saw that it extracts boost to the cache directory ~500MB of data and then it has another copy to do into the target directory, all that copies takes a loot of time and make it look like it hangs. You can disable it by creating a ~/.cocoapods/config.yaml file with the following contents. skip_download_cache: true username_12: change the source at https://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz and save the spec file. issue gone. contact me via <EMAIL>, if you want further explanation. username_13: I had the following error. ``` [!] Error installing boost [!] 
/usr/local/php5/bin/curl -f -L -o /<KEY>T/d20180328-76160-oa20vt/file.tgz https://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz?r=&ts=1513105136&use_mirror=kent --create-dirs --netrc-optional % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (35) Unknown SSL protocol error in connection to downloads.sourceforge.net:443 ``` Open `~/.cocoapods/repos/master/Specs/9/9/d/boost/1.59.0/boost.podspec.json` Change the URL from this ```json "source": { "http": "https://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz?r=&ts=1513105136&use_mirror=kent" } ``` to this ```json "source": { "http": "http://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz?r=&ts=1513105136&use_mirror=kent" } ``` username_14: Updating the downloading link: ```https://jaist.dl.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.gz``` username_15: Thanks @username_6 . Or `open ~/.cocoapods/repos/cocoapods/Specs/9/9/d/boost1.59.0/boost.podspec.json` username_16: After constantly having this problem, i put `skip_download_cache: true` in `~/.cocoapods/config.yml` (didn't work), then in `~/.cocoapods/config.yaml` which appears to work (no cached stuff at `~/Library/Caches/CocoaPods`). No hanged install anymore.
cwtickle/danoniplus
644412301
Title: [Request] Implement color changes for the arrow fill area Question: username_0: ## Details of Improvement - v13 implemented the initial color setting for the arrow fill area, but color changes are not implemented yet. I would like it to support color changes in the same way as the other color-change features. ## Expected Behavior - Implement it via shadowcolor_data / ashadowcolor_data. The specification should follow color_data and acolor_data. - However, because the arrow fill supports the `Default` setting (follow the original arrow / freeze-arrow color, with the alpha set automatically), this setting must also be supported. ## Other Considerations - The `Default` setting may be implemented later, since it would be a complicated implementation.
HyeIn-Kim/MineSweeper
449273314
Title: Special character error Question: username_0: In the drawingfunctions.c file, the special characters in the `printf("★");` `printf("○");` `printf("▶");` parts cause errors. <img width="614" alt="오류" src="https://user-images.githubusercontent.com/48882165/58482155-d6301c00-8198-11e9-90c6-d7c65f1a9134.PNG"> Answers: username_0: It is an encoding problem; saving the file again as Unicode makes it work correctly. Status: Issue closed
sipb/machine-room
314811997
Title: 2018 IP address renumbering Question: username_0: We're getting renumbered out of our 192.168.3.11/16 address block, due to IS&T selling 192.168.3.11/9. Coordination of addressing this issue is being handled separately. Contact the vice chair if you need information. Status: Issue closed Answers: username_1: This was finished. :cry:
mml2222/MakingWithMonsters
437965590
Title: Pick a Monster button and dialog Question: username_0: **When** user's project is in progress And users is on project home page **then** user sees button "Pick a Monster" **When** user clicks button "Pick a Monster" **Then** user is prompted to select a monster ("Choose the Monster you want to work with") And the user sees an array of all the monsters they predicted And the user sees a button labeled "Cancel" **When** the user clicks on a monster **Then** the user sees a button labeled "Next" **When** the user clicks "Next" **Then** the user sees "Use this monster to help you with your project. Come back here to share your progress!" and the button "Cancel" **When** the user clicks "Share Progress" **Then** the user is prompted to upload a photo And the user sees the button "Cancel" **When** the user uploads a photo **Then** the user sees the button "Submit" and the button "Cancel" **When** the user clicks submit **Then** the photo and monster are saved to the user's Project's Monster Moments And the user is redirected to their project home page **When** the user clicks cancel **Then** nothing is saved and the user is redirected to their project home page Answers: username_0: @username_1 are you ok with me working on this issue and you working on #11? (For #11 you should complete the pull request for #5 before branching, because it will involve building off of what I've done for #5) username_1: Ok! Status: Issue closed
alnv/catalog-manager
247119755
Title: Free alias selection. Question: username_0: The backend user should be able to decide for themselves which of their fields acts as the alias. Answers: username_0: see #47 username_1: It would be nice if the alias could optionally also be built from several data fields and, on request, include a timestamp as well. Examples: LastNameFirstNameDateOfBirth = MustermannHans1501683093, article name plus the date the record was (first) created = Kugelschreiber-1501683211. That way uniqueness is guaranteed in any case. username_0: -> postponed to Catalog Manager 2.0. Not feasible right now. Status: Issue closed
guigrpa/docx-templates
667644602
Title: Error when using docx-templates in TypeScript Question: username_0: I have an issue when using docx-templates with TypeScript. When I build the code it shows ` node_modules/docx-templates/lib/main.d.ts(3,8): error TS1259: Module '"/tmp/node_modules/jszip/index"' can only be default-imported using the 'esModuleInterop' flag src/service/FollowupService.ts(7,8): error TS1259: Module '"/tmp/node_modules/moment/ts3.1-typings/moment"' can only be default-imported using the 'esModuleInterop' flag ` I don't want to use esModuleInterop: true because it will break other code. Can you help me? Answers: username_1: Can you provide me with a minimal example project so I can see if I fixed this correctly? I'm not sure exactly what I should change for this library to work with your compiler configuration. Let me know if you have any pointers. Status: Issue closed username_0: I have fixed it; I had configured the tsconfig file incorrectly. username_1: Great! Thanks for taking the time to close the issue 👍
phantranggg/ITSS-Mongu
524741899
Title: Home page icon is not displayed Question: username_0: ![image](https://user-images.githubusercontent.com/27731405/69112512-fd660700-0ab2-11ea-95c9-e50ad1c54685.png) Or ![image](https://user-images.githubusercontent.com/27731405/69112535-07880580-0ab3-11ea-8c74-1e437cea9a4e.png)<issue_closed> Status: Issue closed
Hacker0x01/react-datepicker
336325712
Title: "Clear" button receives "enter" click from different fields Question: username_0: (If you have multiple date pickers, then pressing enter again will cause the second one to clear). Answers: username_0: This is caused by the clear button not having `type="button"`. The default for the `type` attribute for `button` elements is "submit", and when you press enter on a form, it will send a click event to the first button where `type="submit"` (or type is undefined, and therefore defaults to submit). Adding `type="button"` to the clear button will resolve this issue. username_0: resolved in F86a8e4 Status: Issue closed
helloSystem/ISO
776528816
Title: Remote Assistance scrollbar not working as expected Question: username_0: With the display of the remote system initially a little to large for the local system, I found one dock partially overlaying another, with transparency, which was disorientating. I used the scrollbar (to avoid the overlay), then aimed to reduce the resolution of the remote display but could not click on the green arrow to confirm the reduction: ![2020-12-30 15:35:42](https://user-images.githubusercontent.com/192271/103365933-3398f700-4ab9-11eb-9b28-5e4ec738307d.png) Further use of the scrollbar scrolled down but never up: ![2020-12-30 15:36:46](https://user-images.githubusercontent.com/192271/103365969-44e20380-4ab9-11eb-8ff5-9d2a03a6911f.png) – and so on, and the green arrow remained unreachable for click purposes. Screen recording: [2020-12-30 15:47.tar.gz](https://github.com/helloSystem/ISO/files/5754835/2020-12-30.15.47.tar.gz) Answers: username_1: Not sure whether we can do anyhting about this, or whether this needs to be addressed in the underlying `ssvncviewer` application that is used to display the remote VNC screen. Can you try to find out whether the bug can be reproduced outside of Remote Assistance? Thanks. username_0: Thanks, I suspected a bug in something underlying … username_1: Try `ssvncviewer --help` ;-) username_0: Thanks. Also found: https://www.freshports.org/net/ssvnc/ Typo: `ssvncviewer -help` (I found it already installed.) username_0: I discovered a workaround: 1. drag the scroller upwards (no effect) 2. do not release the scroller 3. key up <kbd>↑</kbd> repeatedly until the top is reached 4. release the scroller. So, for example, from this: ![2020-12-30 17:02:21](https://user-images.githubusercontent.com/192271/103369087-2b44ba00-4ac1-11eb-94f2-2f43212fc946.png) – to this: ![2020-12-30 17:02:53](https://user-images.githubusercontent.com/192271/103369109-34358b80-4ac1-11eb-8320-647263f319ef.png) username_1: Are you running the server and the client on the same machine? In my tests, this gave strange behavior sometimes. username_0: No, still server on the Ergo Vista 631 and client in VirtualBox. Re: https://www.freebsd.org/cgi/man.cgi?query=ssvncviewer&manpath=ports#Enhanced_TightVNC_Viewer_(SSVNC)_OPTIONS can we experiment with a greater width for the scrollbar? `-sbwidth n` I don't know what's most Mac-like but I imagine five or six pixels. Thanks username_1: Sure, feel free to exeriment and if you like it then you might want to send a Pull Request. Thanks! username_0: I discovered the (obvious) F8 route to changing the width. No improvement. I'll continue to experiment. username_0: Uploading 2021-01-31 11:23.mp4… Nothing much to see here. Partly: me repeatedly confused by an inability to resize a window that's apparently _not_ maximised when technically it _is_ maximised. Around 01:00 on the timeline there's reproduction of the scrollbar issue. username_1: This is probably an issue of the underlying http://www.karlrunge.com/x11vnc/ssvnc.html? username_0: Thanks, https://www.karlrunge.com/x11vnc/ssvnc.html does not respond, I think I had the same problem a few weeks ago … it's in the Wayback Machine, https://web.archive.org/web/20201111224641/https://www.karlrunge.com/x11vnc/ssvnc.html
sugastreet22/furima-32555
782768814
Title: [Request] Flea-market app behavior check Question: username_0: # URL of my flea-market app https://furima-32555.herokuapp.com/ # Basic auth ID and password ID: admin Password: 2222 # Seller email address and password Email address: <EMAIL> Password: <PASSWORD> # Buyer email address and password Email address: <EMAIL> Password: <PASSWORD> # Card number, expiry date and security code for test purchases Number: 4242424242424242 Expiry: March 2023 Security code: 123 Answers: username_1: Thank you for your submission! When we checked the behavior, the following problem was found, so please fix it! [Items to fix] - [ ] Sellers should be able to delete their product information. Pressing the delete button currently leads to an error page and the item cannot be deleted. Once it is fixed, please submit the final-assignment completion report form below again! (There is no need to create a new issue.) https://forms.gle/dYuR1wjeHepbEhU7A username_0: I have implemented deletion of product information by the seller, so please check it. username_2: @username_0, thank you for your submission!! I confirmed that the fixed part now works correctly without any problems! LGTM ✨ Congratulations 🎉 I could tell that you are steadily acquiring the skills to build a practical application. From here until the final presentation you will move on to further additional features and to developing your own original app. We sincerely hope it becomes something great that makes use of everything you have learned so far 🍀 **Note: only the required features were checked; any additionally implemented features were not. If you have implemented extra features, please check again yourself that they behave as expected.**
Lusito/forget-me-not
351936002
Title: cookies from unloaded tabs not cleared on startup if "Apply Rules" is checked Question: username_0: Steps to reproduce: 1. In FMN Settings, under "Clean on browser start," check "Cookies" and "Apply Rules." 2. In about:preferences#general, under "Startup," check "Restore previous session." Keep this tab open. 3. Open a couple of tabs from sites that set cookies, and check to see that cookies have actually been set for those domains. Keep their tabs open as well. 4. Navigate back to about:preferences so that it's the active tab. 5. Close and restart browser. Only the about:preferences tab should be loaded; the others should not. 6. Check cookies again. Expected behavior: The cookies from the unloaded tabs should have been cleared. Actual behavior: The cookies from the unloaded tabs are still there. Notes: I've been using Cookie AutoDelete, which does clear cookies from unloaded tabs on startup, so maybe that's why my expectations are what they are. But C-AD's behavior does seem more logical to me: When you close the browser, you close the tab, so its cookies should be deleted. I've also found that FMN _will_ clear cookies from unloaded tabs on startup if you uncheck the relevant "Apply Rules" box – but then FMN disregards _all_ rules, so for example your whitelisted cookies get cleared as well. I would normally assume that "Apply Rules" refers only to user-defined rules, not to open tabs. I'm also aware of #54, but that seems to be only/mainly about tabs that are unloaded while the browser is running. I agree with [your comment there](https://github.com/username_1/forget-me-not/issues/54#issuecomment-383698565) that cookies from such tabs should be kept. But startup seems like a different situation to me. (FMN 1.0.2 on FF 61.0.2) Answers: username_1: Expected behavior in FMN is that cookies don't get cleared for tabs that are still open. unloaded or not should not matter. Maybe there could be an option for users who expect them to be cleared, as it seems some seem to want this. I personally don't want to get logged out of things that are still open. username_0: My unloaded tabs have nothing to do with logins; they're mainly reminders for things I want to read/research. So maybe my problem is not so much an FMN problem as it is a username_0's time/tab management problem ;-) But here's another thought: If you do decide to make this an option for those who want it, there's a simple* way to do that: Make it the default behavior. Then, if you want to protect cookies in unloaded tabs on startup – to keep a login active or for any other reason – all you have to do is whitelist them. * Conceptually simple, that is – I'm not a webextension developer, so I have no idea whether it would be easy to implement or not; please forgive me if this is an annoying suggestion for you, the one who would actually have to do the work. username_2: ![default](https://user-images.githubusercontent.com/42351556/44046409-8c1adec8-9f55-11e8-9520-0a60ea76aae2.jpg) - [ ] - [ ] [flash_player_30_0_admin_guide.pdf](https://github.com/username_1/forget-me-not/files/2321560/flash_player_30_0_admin_guide.pdf) username_3: I've been using _Cookie AutoDelete_, and now _Self-Destructing Cookies (WebEx)_ because cookies related to closed tabs are cleared immediately when those tabs are closed and there's no other instance of those domains in remaining tabs. It avoid tracking cookies to track any further, and it comforts people with privacy concerns. 
These extensions do have a switch to only delete cookies on browser startup/exit as well, though. So +1 for having such a feature in FMN.
isha21/Project-4
250762134
Title: Redundant .gitignore Question: username_0: Inside the plugins folder there is a .gitignore. Since we already have a .gitignore in the root of our project (the root of wp-content), the one in the plugins folder is unnecessary! - [ ] Delete the .gitignore in the plugins folder (it is extra) Answers: username_1: done Status: Issue closed
yoanlcq/vek
608974032
Title: Add better numerical type conversion functions Question: username_0: Writing `vec.numcast().unwrap()` is really noisy, especially where I don't expect any conversion errors and don't want to handle them. To avoid unnecessary checks and panics I keep writing `vec.map(|v| v as f32)`, which is still not ideal... So I suggest adding a new function for conversion between different numerical types, using `num_traits::cast::AsPrimitive`. In most cases, precision loss and overflowing aren't something you should worry about, especially in graphics programming (e.g. converting a u32 framebuffer extent to f32). Perhaps the old `numcast` function should be renamed to `try_cast` and a new panicking `cast` be introduced to match euclid behaviour. But that's another question... Answers: username_1: I must confess I did not understand what you mean in this part; a bit more detail would help (thanks in advance and sorry!) Status: Issue closed username_1: This should be fixed; although I closed the issue with my commit, feel free to add to the discussion regardless. I'll push a new version with this change very soon. username_0: I mean, euclid has `try_cast` and `cast`, which is basically `try_cast().unwrap()`. Not sure if it's necessary, but it saves a few keystrokes if you don't want to handle the error but an overflow would break the subsequent code, so you want to keep the check. Also, what do you think about adding `as_iXX/as_uXX` in addition to `as_()`? It looks way better than `as_::<i32>()`, but that's quite a lot of new functions. username_1: From personal experience, this isn't ideal; bloating the library with as_foo for every primitive type isn't worth the benefit of avoiding typing `as_::<foo>`, which is only 4 characters longer, assuming you have to specify the turbofish at all (in some cases the type is inferred, for instance). To me, it's a balance between "there should be only one way to do this kind of task" (e.g. keep `as_()`) and "this task is performed so often that it deserves special shortcuts" (e.g. add `as_iXX()` etc). I'm imagining adding `as_xxx()` for all (something like 12?) primitive types; do it for vectors, then matrices, then geometric shapes... After all, they now do implement `as_()`, so we might as well be consistent. That would quickly add up, again just to save 4 characters which can even be completely omitted in some contexts... The "balance" isn't right, if that makes sense. username_0: I understand. For me, `vec.as_()` is more than enough, thanks for adding.
minimaxir/hacker-news-undocumented
413823279
Title: Add behavior of not replying to one's own comments' replies Question: username_0: In my experience HN is not made for discussions except when one is the main person concerned by the post. I actually prefer editing my comment instead of replying, so as not to bloat the discussion, but I think most people just don't reply either because they don't check the replies (very likely) and/or do not want to engage in a public discussion/debate/flamewar. Answers: username_1: Also, on HN there is a sense that an opinion does not *belong* to someone, which is why other people can (and are encouraged to) respond to criticism in place of the parent.
jupyter-widgets/ipywidgets
336378921
Title: Plot Issue: Failed to display Jupyter Widget of type FigureCanvasNbAgg Question: username_0: I'm trying to create interactive plots. My JupyterLab server is running in the cloud, in a Google Compute Engine Instance. When I try to plot some data, i got this: ![41950377-4788a594-799c-11e8-8c7b-8bcf455490f4](https://user-images.githubusercontent.com/22618324/41998361-f106cb70-7a30-11e8-8b6d-a5ff5e9dc15f.png) Answers: username_1: Do you have the ipywidgets lab extension installed? username_1: `jupyter labextension list` to see the extensions you've installed... username_0: Hello, Jason, Thanks for the help. That's what I get: ``` rodrigo_tadewald@username_0-jupyterlab:~$ jupyter labextension list JupyterLab v0.32.1 Known labextensions: app dir: /home/rodrigo_tadewald/anaconda3/share/jupyter/lab @jupyter-widgets/jupyterlab-manager @jupyter-widgets/jupyterlab-manager v0.35.0 enabled OK @jupyterlab/plotly-extension @jupyterlab/plotly-extension v0.16.0 enabled OK jupyter-matplotlib jupyter-matplotlib v0.1.0 enabled OK ``` username_1: And what version of ipywidgets do you have installed in the kernel? `import ipywidgets; ipywidgets.__version__` username_0: My version is '7.1.1'. username_1: Does it help to upgrade to ipywidgets 7.2? What version of ipympl do you have installed? Can you try in a fresh environment with a fresh install of ipympl? I think it has had some updates in the last few weeks. username_2: Having the same issue with ``` JupyterLab v0.35.3 Known labextensions: app dir: /home/andy/anaconda3/envs/py37/share/jupyter/lab jupyter-matplotlib v0.3.0 enabled OK ``` ipywidgets 7.4.2 ipympl 0.2.1 username_2: Solved, I had installed the pip way and needed to run ``` jupyter labextension install @jupyter-widgets/jupyterlab-manager jupyter labextension install jupyter-matplotlib ``` It had prompted me if I want to install ``jupyter-matplotlib`` earlier and that ran some node installation but apparently I still needed to run these two lines.
lanl-ansi/PowerModels.jl
396206285
Title: Simplify Generator Data in PSSE Parser Question: username_0: Unless explicitly specified by the PSSE network data, the following generator fields should be omitted from the PowerModels data model: Pc1, Pc2, Qc1min, Qc1max, Qc2min, Qc2max, ramp_agc, ramp_10, ramp_30, ramp_q, apf. Status: Issue closed
serilog/serilog
457152152
Title: Numeric format specifier not respected Answers: username_1: Thanks for the note! This is indeed a bug; I think it might have evaded us because it does not affect the console sink; I get the correct output in _Serilog.Sinks.Console_, but in _Serilog.Sinks.File_ I see: ``` 2019-06-19 00:56:24.724 +10:00 [ERR] Status [0x16] ``` The code that _should_ be responsible for passing through the correct numeric format is: https://github.com/serilog/serilog/blob/dev/src/Serilog/Events/ScalarValue.cs#L90 - needs some breakpoint debugging to work out what's gone wrong. username_2: I think it is a bug in Serilog.Sinks.File. ScalarValue.Render() is not called when using file sink username_2: Here is what I found: File sink internally use MessageTemplateTextFormatter. When I called ``` using (var log = new LoggerConfiguration() .WriteTo.File(@"123.txt") .CreateLogger()) { log.Information("x = {x:X8}", 16); Console.WriteLine("123"); } ``` Then, on MessageTemplateTextFormatter.cs line 90-93 ``` if (pt.PropertyName == OutputProperties.MessagePropertyName) { MessageTemplateRenderer.Render(logEvent.MessageTemplate, logEvent.Properties, writer, pt.Format, _formatProvider); } ``` logEvent.Template is "{x = {x:X8}}" but pt.Format is "lj" (pt is PropertyToken "{{Message:lj}}") Since the "format" argument of MessageTemplateRenderer.Render(), pt.Format, starts with "l", "x" is treated as a literal in MessageTemplateRenderer.RenderPropertyToken(). As a result, x is rendered as "16" username_1: Thanks for looking into this; you've reminded me that we've been looking at some related behavior lately (CC @tsimbalar). It's going to require some thought, at least around how to avoid the confusion/footgun. username_2: So I guess we are not going to fix this at this moment? username_1: @username_2 I have #1325 in the works Status: Issue closed
urllib3/urllib3
342621842
Title: sdist has world-writable permissions Question: username_0: The sdist for urllib3 1.23 on PyPI has some surprising world-writable permissions on a number of files: ``` root@6821c0c367c3:~/urllib3-1.23# ls -lan total 152 drwxrwxrwx 8 1000 1000 4096 Jul 19 07:56 . drwx------ 1 0 0 4096 Jul 19 07:56 .. -rwxrwxrwx 1 1000 1000 28535 Jun 5 03:16 CHANGES.rst -rwxrwxrwx 1 1000 1000 8542 Jun 5 03:16 CONTRIBUTORS.txt -rwxrwxrwx 1 1000 1000 1175 Jun 5 03:04 LICENSE.txt -rwxrwxrwx 1 1000 1000 204 Jun 5 03:04 MANIFEST.in -rwxrwxrwx 1 1000 1000 1192 Jun 5 03:16 Makefile -rwxrwxrwx 1 1000 1000 41888 Jun 5 03:22 PKG-INFO -rwxrwxrwx 1 1000 1000 3780 Jun 5 03:16 README.rst drwxr-xr-x 3 0 0 4096 Jul 19 07:56 build -rwxrwxrwx 1 1000 1000 212 Jun 5 03:16 dev-requirements.txt drwxrwxrwx 4 1000 1000 4096 Jun 5 03:22 docs drwxrwxrwx 3 1000 1000 4096 Jun 5 03:22 dummyserver -rw-r--r-- 1 0 0 5966 Jul 19 07:56 record.txt -rwxrwxrwx 1 1000 1000 620 Jun 5 03:22 setup.cfg -rwxrwxrwx 1 1000 1000 2899 Jun 5 03:16 setup.py drwxrwxrwx 5 1000 1000 4096 Jun 5 03:22 test drwxrwxrwx 5 1000 1000 4096 Jun 5 03:22 urllib3 drwxrwxrwx 2 1000 1000 4096 Jun 5 03:22 urllib3.egg-info ``` Installing this with the old `python setup.py install --single-version-externally-managed` command results in world-writable metadata files being installed with the package (in .egg-info). Installing with pip does not appear to have this issue. I ran the release.sh script on my system and got much more expected permissions: ``` $ ls -la urllib3-dev total 140 drwxr-xr-x 6 root root 4096 Jul 19 08:03 . drwxr-xr-x 3 root root 4096 Jul 19 08:03 .. -rw-r--r-- 1 root root 28841 Jul 19 08:03 CHANGES.rst -rw-r--r-- 1 root root 8667 Jul 19 08:03 CONTRIBUTORS.txt -rw-r--r-- 1 root root 1175 Jul 19 08:03 LICENSE.txt -rw-r--r-- 1 root root 204 Jul 19 08:03 MANIFEST.in -rw-r--r-- 1 root root 1192 Jul 19 08:03 Makefile -rw-r--r-- 1 root root 42281 Jul 19 08:03 PKG-INFO -rw-r--r-- 1 root root 3780 Jul 19 08:03 README.rst -rw-r--r-- 1 root root 221 Jul 19 08:03 dev-requirements.txt drwxr-xr-x 4 root root 4096 Jul 19 08:03 docs drwxr-xr-x 3 root root 4096 Jul 19 08:03 dummyserver -rw-r--r-- 1 root root 620 Jul 19 08:03 setup.cfg -rwxr-xr-x 1 root root 2937 Jul 19 08:03 setup.py drwxr-xr-x 4 root root 4096 Jul 19 08:03 src drwxr-xr-x 5 root root 4096 Jul 19 08:03 test ``` so I'm a bit baffled as to how it ended up that way. It would be nice for the next sdist to have more normal permissions in the tarball. Maybe it's some configuration on the system where the release script is typically run? Answers: username_1: I'll check this out, we definitely should ensure that world-writeable is not the default for our sdist in our new release protocol. username_2: @username_3 is this resolved now or will be by your modifications to releasing (https://github.com/urllib3/urllib3/pull/1508)? username_3: This will in theory fix that issue by removing the human element of releases unless there's some special transformation we need to do to the tarball after building it. Does the default sdist build have world-writable perms? username_3: Closed via #1508 Status: Issue closed
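For anyone who wants to check which modes an sdist actually records without extracting it, here is a small sketch using only the Python standard library (the filename below is a placeholder; any sdist tarball can be inspected the same way):

```python
import tarfile

# Print the permission bits recorded for every member of an sdist tarball.
with tarfile.open("urllib3-1.23.tar.gz", "r:gz") as sdist:  # placeholder path
    for member in sdist.getmembers():
        flag = "  <-- world-writable" if member.mode & 0o002 else ""
        print(f"{member.mode:04o}  {member.name}{flag}")
```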
simple-icons/simple-icons
586570565
Title: Icon request: Loom Question: username_0: **Name:** Loom **Website:** loom.com **Official resources for icon and color:** https://www.loom.com/press #fd5e60 Answers: username_1: **Alexa rank:** [~5.2k](https://www.alexa.com/siteinfo/loom.com) username_1: @username_0, do you want to give this one a try yourself? username_1: Closed by #2951 Status: Issue closed
Shavonsky/Credit-Card-Number-Validator
679759189
Title: American Express (AMEX) cards are reported as not valid Question: username_0: Three American Express (AMEX) cards were checked: - 346762835865054: Result is FAIL - 377354144638224: Result is FAIL - 347205818711977: Result is FAIL Environment: - Windows 7, 64-bit - Java version "11.0.8" 2020-07-14
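For reference, a quick standalone Luhn check (a Python sketch, independent of the project's own code) reports all three of these numbers as checksum-valid, which suggests the failure lies in how the validator handles 15-digit AMEX numbers rather than in the numbers themselves:

```python
def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

for card in ("346762835865054", "377354144638224", "347205818711977"):
    print(card, luhn_ok(card))  # all three print True
```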
atom/apm
190144296
Title: apm temp directory gets deleted before the installation can finish Question: username_0: We are running into big issues in [Hydrogen](https://github.com/nteract/hydrogen/) because the install directory gets deleted before the package is fully installed. We'd really appreciate your help. Here is the full log: [hydrogen-install.log.zip](https://github.com/nteract/hydrogen/files/596086/hydrogen-install.log.zip) https://github.com/nteract/hydrogen/issues/506 https://github.com/nteract/hydrogen/issues/507 /cc @pingshunhuangalex @asmotrich Answers: username_0: `apm` could use the Atom package directory instead of a temp directory to solve this issue.
kiali/kiali
394676464
Title: Valid Virtual Service Definition Fails Config Validation Question: username_0: **Describe the bug** It appears that Kiali is failing to validate a virtual service. I modified the virtual service from the tutorial for accessing google over https. It appears to be working from the istio side of things just fine but is failing the validation in the UI. I'm also not sure if this is the reason it is not showing the graph view as well. https://istio.io/docs/tasks/traffic-management/egress/#make-requests-to-the-external-services ![virtual service error](https://user-images.githubusercontent.com/7543162/50521882-90f2ab80-0a8d-11e9-951e-878c734cab72.png) The error displayed is: *VirtualService doesn't define any route protocol.* **Versions used** Kiali: 1.0 Istio: 1.0.0 Kubernetes flavour and version: GKE v1.10.9-gke.5 **To Reproduce** Here is the virtual service yaml file. when using kubectl describe, it outputs what you would expect, but when displayed in the kiali UI it cuts off after host. `apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: avalara spec: hosts: - sandbox-rest.avatax.com tls: - match: - port: 443 sni_hosts: - sandbox-rest.avatax.com route: - destination: host: sandbox-rest.avatax.com port: number: 443 weight: 100` ` **Expected behavior** No validation error and the entire virtual service definition is shown. Answers: username_0: So I have upgrade to: kiali-ui 0.10.1 (e58567cf961e281751b46582499c1e71ac619e97) kiali v0.10.1 (cc2251b69485e23793ba0b0bfa1c4198ede9653b) Components Istio 1.0.5 Prometheus 2.3.1 Kubernetes v1.10.9-gke.5 And now it shows the full definition but now has the error: *Destination weight on route doesn't have a valid service* ![kiali](https://user-images.githubusercontent.com/7543162/50523307-001fce00-0a95-11e9-89bb-27185777b1af.png) username_1: Hi @username_0, is there a DestinationRule that matches the destination section (specifically the host) as defined in this VirtualService? Are you trying to access something outside of the mesh or inside the mesh? @xeviknal Is this particular validation message documented on Kiali website? I searched the validations tables and didn't spot this one - perhaps the message is slightly different? username_0: @username_1 is that an istio requirement or a kiali one? Because from the istio side of things everything is working just fine. Here is the guide I mentioned I followed: https://istio.io/docs/tasks/traffic-management/egress/#make-requests-to-the-external-services username_2: @username_0 do you have both the ServiceEntry and the VirtualService defined in the same namespace? username_3: I think Istio does not require a DR when there is only one possible destination in a VS, so that the implicit weight is 100%. I think what @username_0 is seeing may be a different incarnation of https://issues.jboss.org/browse/KIALI-2147 username_0: @username_3 yes, that could be it. I have mTLS disabled at the moment. And yes, they are both in the same namespace. username_4: This was fixed in PR #789, it's a result of not having any workloads that would trigger the serviceEntry lookup.
healthlocker/oxleas-adhd
270275039
Title: Copy Additions to Get Support Page Question: username_0: - [ ] **Greenlights** - They have has secured funding for a further 5 years. They offer up to 12 sessions of in-house behavioural support to families of children with disabilities, including ADHD. We can refer or families can self refer. Pete and Toni are due to do a one off talk to parents on "Top 10 tips for managing ADHD". Pete will send me the details. They no longer run the ADHD support group. Contact details : <EMAIL> - [ ] **Expert patients - Supporting parents course** : A 6 week course for parents of a child with a disability, including ADHD. Contact : 0208 921 5528 or <EMAIL> - [ ] **Greenwich Family Action (Talking point plus)** : An emotional & well being service providing counselling & therapeutic support for YP aged 13-19. Family work is no longer offered. Contact : 02088539065 or <EMAIL> - [ ] **Greenwich Parent Voice** : Set up and managed by parents in Greenwich who have a child with a disability, including ADHD. Contact : www.greenwichparentvoice.com **Greenwich Youth Services** : - Young Greenwich Youth Service - Lots of information for YP in Greenwich. Contact : young-greenwich.org.uk - Youth services at The Point : Help and advice for YP. Contact : royalgreenwich.gov.uk/thepoint **Early Help Parenting Support** : - Families First : Practical support is offered by family support workers to families who meet the following criteria : unemployment, children not in education, housing issues, parental health issues, DV, debt, anti-social behaviour - Parenting 4 parents : A 7 week positive parenting course for parents of children aged 5-13. - About boys : 4 week parent course about parenting boys in preschool and primary school. Run in children centres. - Freedom programme : Parenting course for female survivors of DV. - Baby & me parenting course : For vulnerable mothers from pregnancy to babies aged 1. Contact : 0208 921 4590 (Duty line). For a referral form contact : 0208 921 6921. For further info. - <EMAIL> - [ ] Add a link to care team in the get support page? Under Greenwich ADHD service could we add some text that says message my care team and a hyperlink? Answers: username_0: Mailto links not working on any of these... ![screenshot 2018-01-19 09 34 31](https://user-images.githubusercontent.com/24604903/35144270-fb83aa4e-fcfb-11e7-9402-29aa93776716.png) username_0: Hopefully I've got it this time! Status: Issue closed
pgbackrest/pgbackrest
450282050
Title: Question: when will V 2.14 make it to PGDG? Question: username_0: I am using Ubuntu 16.04 LTS and after updating cache (I have PGDG repo installed), I still only see version 2.13. Status: Issue closed Answers: username_1: Looks like it just hit PGDG. We don't have any control over packaging so if you have questions in the future you should send them to the pgsql-pkg-debian list.
gocd/gocd
455044462
Title: Apply secret config rules on environment config create or update Question: username_0: ```xml <secretConfig id="production_vault" pluginId="vault_based_plugin"> <description>All secrets for production</description> <configuration>...</configuration> <rules> <allow action="refer" type="environment">production_env</allow> </rules> </secretConfig> ... <environments> <environment name="can_not_refer_secret_config_production_vault"> <environmentvariables> <variable name="SSH_KEY"> <value>{{SECRET:[test][SSH_KEY]}}</value> </variable> </environmentvariables> <agents> <physical uuid="9ed1dbd5-3bfc-4b30-a9b3-0390bdbd2f8b" /> </agents> <pipelines> <pipeline name="deploy" /> </pipelines> </environment> </environments> ``` </details> Answers: username_1: @username_0 where do you think we can add this validation? username_0: ```java public boolean isValid(CruiseConfig preprocessedConfig) { EnvironmentConfig config = preprocessedConfig.getEnvironments().find(this.environmentConfig.name()); try { rulesService.validateSecretConfigReferences(preprocessedPipelineConfig); } catch (RulesViolationException e) { result.unprocessableEntity(e.getMessage()); } boolean isValid = config.validateTree(ConfigSaveValidationContext.forChain(preprocessedConfig), preprocessedConfig) && result.isSuccessful(); if (!isValid) { String allErrors = new AllConfigErrors(preprocessedConfig.getAllErrors()).asString(); result.unprocessableEntity(LocalizedMessage.composite(actionFailed, allErrors)); } return isValid; } ``` username_1: @bhupendra I am just wondering if we should add this validation. Few of my concerns are, 1. With this validation a pipeline can turn invalid on an update even if there is no change to the config, this can happen if a rule has changed. 2. Since this validation is done in a command, the behaviour would differ between API and PaC. username_0: Agreed, that there will be a difference between pipeline config through API and PaC. For PaC this validation can be done as part of the preflight check. We can add validation on config repo parse as well so that behavior will be similar in both the cases. @username_2 , @ketan - WDYT? username_2: But that is on edit of the secret config. Right? Won't that fail? Remember that we also have operations such as "Move pipeline to group X" (in Admin -> Pipelines). That has the ability to make the config invalid, if a pipeline moves to a group in which a secret used in the pipeline is invalid. <hr> I'm not entirely clear which rules are evaluated at config save time and which rules are evaluated at runtime. I agree with @username_1 that we should decide and be consistent. Even between APIs and PaC. username_2: Ah, I had missed https://github.com/gocd/gocd/issues/6415 username_3: Since the motivation behind this proposal is for faster feedback while using secrets in create/update of any entity, there are 3 input errors possible - Provide invalid secret config id or Invalid secret key or use a secret in entity where the rule prohibits usage - validating only rules may still leave the UX incomplete username_0: Agreed, Ideally, the UI should smart enough to provide the suggestions about available secret keys for the selected secret config as mentioned in https://github.com/gocd/gocd/issues/5836 PS: This is also valid for environment config. username_2: Ok, with that expectation, I understand why that secret config save should not fail. 
So, if the case mentioned in https://github.com/gocd/gocd/issues/6369#issuecomment-498348913 happens, then ad admin will be able to save the secret config change. However, next pipeline run might fail (at runtime). If a pipeline admin tries to edit that pipeline, then it won't be allowed to be edited till the secret usage (which is now invalid) is removed. Right? What happens when the server is restarted? Or if the XML is edited directly. If a full config save validates this condition, then the server won't come up. Am I right in saying that you're planning to add this verification only in pipeline (or env) update API call? username_1: Discussed this with @username_0 and @username_2. This validation is required for quicker feedback to users over runtime failures. This needs to be added when we update the config through API, config_xml, UI and PaC to keep it consitent. We need not wait on this to live with Secrets Management. We can introduce this validation or come up with a better UX in subsequent releases.
dotnet/orleans
84193261
Title: Minimize/eliminate the usage of code generation in Orleans Question: username_0: Orleans uses code generation in a number of places. Our goal is to minimize the usage of code generation in Orleans and maybe even completely eliminate it. The reasons to minimize code generation usage are: 1) The code that generates code is hard to maintain. 2) Generated code is harder to test and harder to do dependency injection with. 3) "Developers don't like generated code". That's the feedback we got from some of our customers. Generated code appears as "magic" to some, harder to follow. 4) Eliminating code generation will make Orleans more portable. For example, it will make porting Orleans to coreclr easier (#368), since coreclr does not support CodeDOM like full .NET. Places where we use code gen today in Orleans: - [ ] Concrete Grain factories. The work is already in progress to eliminate those (#469). - [ ] State classes. The work is already in progress to eliminate those (#465). - [ ] Concrete Grain References (what is usually known in RPC terminology as stubs). - [ ] Grain invokers (what is usually known in RPC terminology as skeletons). - [ ] Orleans auto generated serializers. Answers: username_1: Now that we switched to the Roslyn-based codegen, do we need to do anything else here? I don't see a proposal for an alternative for the last three items on the list. username_0: I would still keep it open, as (if everyone does indeed agree on the merit of minimizing/eliminating the usage of code generation in Orleans) it serves as an argument against: https://github.com/dotnet/orleans/pull/965#issuecomment-162415524. As for concrete proposals for the last 3 items: I did propose in the past the idea of a dynamic dispatch object for "Concrete Grain References". I don't have the implementation to back up that proposal though. username_0: I think we can close this issue. For the first 3 points we indeed eliminated the usage of code generation in Orleans. For the last 3 points instead we went with the path of **improving the code generation in Orleans, via Roslyn-based codegen**. Status: Issue closed
neomjs/neo
524009402
Title: Discussion: Nodejs & GraphQL based middleware Question: username_0: Max (elmasse) brought up this topic: We could set up a separate repository to create a neo.mjs middleware using nodejs & GraphQL. Take a look at: https://graphql.org/graphql-js/ https://medium.com/codingthesmartway-com-blog/creating-a-graphql-server-with-node-js-and-express-f6dddc5320e1 This is an epic and probably involves a lot of work. In short: neo.mjs should have a socket connection to the node server, probably using a custom API. Schemas defined in the middleware could optionally automatically create models on the client side as well as the other way around. Changes to the data of a middleware schema should automatically update the store data of a bound client-side store as well as the other way around. Would help for testing buffering of remotely loaded grids & tables. I could definitely use feedback on this one!
humazed/google_map_location_picker
855920474
Title: google_map_location_picker >=4.1.3 is forbidden. Question: username_0: This problem occurred after an update. Is there a way to solve it? The error is: Because google_map_location_picker >=4.1.3 depends on geolocator ^6.1.14 and gofootball depends on geolocator ^7.0.1, google_map_location_picker >=4.1.3 is forbidden.
habitat-sh/builder
334738244
Title: Upstream depot tweaks Question: username_0: We've gotten feedback from a few customers that our current implementation of upstream depot syncing is sub-optimal. Some tweaks to improve it are: - [ ] Allow the calling process to optionally block until the upstream sync is finished, rather than doing it asynchronously in the background. This could be accomplished with a new command line flag to `hab pkg install`. - [ ] Allow any version of any package to sync, not just the latest version. - [ ] For the background async installation, rather than doing it in a 60 second loop, do it on demand. Status: Issue closed Answers: username_1: Tracking upstream issues and replacement plan in https://github.com/habitat-sh/on-prem-builder/issues/105
cloudfoundry/cf-mysql-release
110511659
Title: p-mysql broker should expose both proxies for HA Question: username_0: The cf-mysql release deploys multiple proxy servers, for HA purpose. But the cloudfoundry broker exposes only one, exposing the cf app to db access failure during bosh deployment (see below) Broker should expose the multiple proxy ips, so that application / buildpacks can leverage the proxy HA ``` +----------------------+---------+--------------------+-----------------+ | Job/index | State | Resource Pool | IPs | +----------------------+---------+--------------------+-----------------+ | cf-mysql-broker_z1/0 | running | cf-mysql-broker_z1 | 192.168.100.150 | | cf-mysql-broker_z2/0 | running | cf-mysql-broker_z2 | 192.168.100.160 | | mysql_z1/0 | running | mysql_z1 | 192.168.100.155 | | mysql_z2/0 | running | mysql_z2 | 192.168.100.165 | | mysql_z3/0 | running | mysql_z3 | 192.168.100.175 | | proxy_z1/0 | running | proxy_z1 | 192.168.100.156 | | proxy_z2/0 | running | proxy_z2 | 192.168.100.166 | +----------------------+---------+--------------------+-----------------+ ``` ``` yaml "p-mysql": [ { "credentials": { "hostname": "192.168.100.156", "jdbcUrl": "jdbc:mysql://192.168.100.156:3306/cf_c49f7310_121a_44a2_a0e6_315ea501b504?user=k6eGjTLmhoL7kx07\u0026password=<PASSWORD>", "name": "cf_c49f7310_121a_44a2_a0e6_315ea501b504", "password": "<PASSWORD>", "port": 3306, "uri": "mysql://k6eGjTLmhoL7kx07:[email protected]:3306/cf_c49f7310_121a_44a2_a0e6_315ea501b504?reconnect=true", "username": "k6eGjTLmhoL7kx07" }, "label": "p-mysql", "name": "testmysql", "plan": "100mb", "tags": [ "mysql" ] } ``` Answers: username_1: Hi @username_0, We usually solve this problem by suggesting that the Operator set up an external load balancer that abstracts the connection between the two proxies. The jump point for more information about this is on the [README.md](https://github.com/cloudfoundry/cf-mysql-release/blob/master/docs/proxy.md#configuring-load-balancer). I understand that this is a feature request to supply multiple hostnames. If provided, in theory clients could round-robin between them when necessary. However, due to the way in which we've set expectations around the HA-ness of the system, this is not a good choice. The proxy is very aggressive in cutting connections to the DB if a MySQL server goes bad. The agreement between the proxy and the client is that if the client re-connects, the proxy will route the new connection to a healthy server. As described in the documentation, the two proxies do not (yet) sync on choosing the a new, healthy server. If clients randomly choose between the two proxies, they may end up connecting to different MySQL servers. This isn't terrifically bad, but it does introduce the possibility of [deadlocks](http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_deadlock). For that reason, I'm inclined to reject your suggestion (for now). If and when we make it so that clients can indiscriminately connect to various proxy servers, we'll look to see if it's possible to provide multiple hostnames to clients. If that's OK with you, go ahead and close this issue. I'll keep the Tracker story in the icebox for execution in the future as a feature request. -- <NAME> Product Manager Pivotal Software, Inc. username_0: hello @username_1 thks for your quick feedback. I missed the loadbalancer section in the README, our concern being the SPOF on the single proxy IP. 
Agreed, there is no point in propagating the different proxies' IPs to the service binding, provided there is an HA mechanism on a given IP for JDBC access. Configuring an external load balancer outside of BOSH might be quite cumbersome. Maybe a keepalived-enabled haproxy mechanism could be included in the proxy, or in a new job in the release like https://github.com/emc-cloudfoundry/haproxy-boshrelease ? Status: Issue closed username_1: @username_0, It turns out that any solution we offer in software just pushes the single point of failure exposure up one level. At some point, there needs to be a durable full-state-failover mechanism such as a HW load balancer or Amazon ELB. In the future, we intend to solve this problem in a very different way; we plan to investigate those solutions some time in the next several months. I'm going to close this ticket for now, as we will address the issue in a different way in the future. But thank you very much for opening it; be reassured that this is on our radar. -- <NAME> Product Manager Pivotal Software, Inc. username_0: OK, thanks for your clear explanations and insight
sparklemotion/nokogiri
1158759851
Title: [bug] `node.replace` method reorders the document position automatically if nesting a tag with the same tag Question: username_0: `node.replace` method reorders the document position automatically if nesting a tag with the same tag (see example below) ```ruby #! /usr/bin/env ruby require 'nokogiri' require 'minitest/autorun' class Test < MiniTest::Spec describe "Node#css" do it "should replace top level <div> with <p> tag" do test_value = '<div><p>Hello</p></div>' doc = Nokogiri::HTML::DocumentFragment.parse(test_value) doc.search('div').each do |node| node.replace("<p>#{node.children}</p>") end result_result = doc.to_html expected = '<p><p>Hello</p></p>' assert_equal expected, result_result end end end ``` Output ``` 1) Failure: Node#css#test_0001_should replace top level <div> with <p> tag [test_nokogiri.rb:18]: Expected: "<p><p>Hello</p></p>" Actual: "<p></p><p>Hello</p>" ``` It should output like the expected result but ending up replaced the `div` with `p` tag but added it to the beginning 😢 **Expected behavior** `node.replace` should replace the node `<div>` with `<p>` without changing the order of the position. **Environment** Environment doesn't seem to matter but ruby 3.0.3 nokogiri 1.13.2 Answers: username_1: If you use a tag combination that's valid then this approach works as expected: ```ruby class Test < MiniTest::Spec describe "Node#replace" do it "should replace top level <div> with <p> tag" do test_value = "<div><em>Hello</em></div>" doc = Nokogiri::HTML::DocumentFragment.parse(test_value) doc.search("div").each do |node| node.replace("<p>#{node.children}</p>") end actual_result = doc.to_html expected_result = "<p><em>Hello</em></p>" assert_equal expected_result, actual_result end end end ``` You have a few options that would modify the DOM tree structure directly, if you like. For example, ```ruby doc.search("div").each do |node| node.name = "p" end ``` or ```ruby doc.search("div").each do |node| node.replace(node.children.wrap("<p></p>").first) end ``` Does any of that help? username_0: @username_1 Ohhhhh I see, thank you so much for your explanation and the details 🙏🏼 It's very helpful and makes sense! 🙇🏼‍♂️ Status: Issue closed
bigskysoftware/htmx
799241329
Title: Doesn't load open posts when returning back - Load More Question: username_0: ### Describe the bug Hello, Load More stopped working for me. Cond I load posts in the Blog section, and then I go inside the news. And I press go back in the browser, All open pages with load through load more are reset. https://github.com/putyourlightson/craft-sprig/issues/92#issue-797045365 ### Screenshots If applicable, add screenshots to help explain your problem. https://www.youtube.com/watch?v=h81CdJ2YObs&feature=youtu.be ### Versions - Plugin version: 1.3.2 - Craft version: 3.6.2 Answers: username_1: @username_2 :point_up: username_2: Here is some reproducible code that assumes that page is loaded at the URL `/htmx` and uses Twig templating to get the `offset` from the query string. ```html {% set offset = offset ?? 0 %} <p>Title {{ offset }}</p> <button hx-get="/htmx?offset={{ offset + 1 }}" hx-push-url="/htmx?offset={{ offset + 1 }}" hx-target="this" hx-swap="outerHTML">Load More</button> ``` After a few clicks of the button, using the browser's back button is inconsistent. username_0: I cannot understand how to add correctly to my code `{% set offset = offset ?? 0 %} {% set entryQuery = craft.entries.offset(offset).section(section).limit(limit) %} {% set entries = entryQuery.all() %} {% for entry in entries %} <h3 class="uk-card-title uk-margin-small uk-text-bold"><a href="{{ entry.url }}">{{ entry.title }}</a></h3> <p class="uk-margin-small">{{ entry.leadIn|striptags|slice(0, 240) }}</p> {% endfor %} {# do sprig.pushUrl('/blog?offset=' ~ offset) #} {% if entryQuery.count() > offset + entries|length %} <button class="uk-button uk-button-default uk-button-large uk-border-rounded uk-align-center" sprig s-val:offset="{{ offset + limit }}" s-target="this" s-swap="outerHTML">{{ 'Load More'|t }}</button> {% endif %} ` username_1: Thanks ben. I'm going to be looking at history this weekend. Status: Issue closed username_2: going to address the history issue in https://github.com/bigskysoftware/htmx/issues/368.
janikvonrotz/issue-manager
75948599
Title: List export to CSV Question: username_0: Create a static method in MainView: public static void exportData(List<T> list){} Inside the method, build a CSV from the list and open a save dialog. Answers: username_1: FilterList to CSV username_1: Export created for ProjektView Status: Issue closed
databricks/koalas
896536258
Title: Explode and json_normalize Question: username_0: Hi, I use pandas to normalize nested JSON files. This uses a lot of RAM, so I will try koalas. The code that I use in pandas is: ``` df.explode(" ").reset_index(drop=True) json_struct = json.loads(df.to_json(force_ascii=False, orient="records")) df = pd.json_normalize(json_struct) ``` I can't find json_normalize in koalas. How can I do the same with koalas?
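One workaround I am considering (just a sketch; it assumes the nested JSON can be read directly by Spark's JSON reader, and the `items` array and its fields below are placeholders for the real field names) is to flatten with plain PySpark and only convert to koalas at the end:

```python
import databricks.koalas as ks  # importing koalas attaches .to_koalas() to Spark DataFrames
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Let Spark infer the nested schema instead of materialising everything in pandas.
sdf = spark.read.json("records.json")  # placeholder input path

# One row per array element, then pull nested struct fields up to top-level columns.
flat = (
    sdf.withColumn("item", F.explode("items"))
       .select(
           "id",
           F.col("item.name").alias("item_name"),
           F.col("item.price").alias("item_price"),
       )
)

kdf = flat.to_koalas()  # stays distributed, unlike pandas json_normalize
print(kdf.head())
```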
quarkusio/quarkus
523108453
Title: %test.quarkus.datasource.url has no effect when running a @QuarkusTest test Question: username_0: **Describe the bug** %test.quarkus.datasource.url has no effect when running a @QuarkusTest test. https://stackoverflow.com/questions/58841384/quarkus-reading-application-properties-from-the-quarkustest **Expected behavior** in test/resources/application.properties %test.quarkus.datasource.url=/my-service/api/v2 `@QuarkusTest @Tag("integration") public class MyResourceTest { @Test public void testMyEndpoint() { given() .when().get("/my-service/api/v2/init") .then() .statusCode(200) .body(is("{}")); }` **Actual behavior** `@QuarkusTest @Tag("integration") public class MyResourceTest { @Test public void testMyEndpoint() { given() .when().get("/init") .then() .statusCode(200) .body(is("{}")); }` **To Reproduce** Steps to reproduce the behavior: 1. in test/resources/application.properties set %test.quarkus.datasource.url=/my-service/api/v2 2. `@QuarkusTest @Tag("integration") public class MyResourceTest { @Test public void testMyEndpoint() { given() .when().get("/my-service/api/v2/init") .then() .statusCode(200) .body(is("{}")); }` -- latest version as of now. Answers: username_1: RestAssured reads the value of quarkus.http.root-path, so you don't have to modify it in your tests. Status: Issue closed
blockonomics/prestashop-plugin
266871411
Title: nothing loads on validation page after clicking pay with bitcoin button Question: username_0: Hi, I installed the module fine on PrestaShop 1.6, but on clicking the Pay with Bitcoin button on checkout it loads the page xxxx/module/username_1/validation and nothing happens: just an empty page except for the website headers and footers. There is nothing there for the customer to make a payment. In the backend it records the order as awaiting payment, but obviously no payment can be made. Any help would be appreciated. Thanks. Answers: username_1: Hi @username_0. Thanks a lot for trying out our PrestaShop module. Can you temporarily enable our module on your store, so that we can debug the issue? Also let me know your email ID; I can invite you to our Slack channel, where we can coordinate to fix this faster. username_0: Hi, thanks for the reply, my email is <EMAIL>. I will enable the module again. username_1: It was a problem with the theme. Fixed. Status: Issue closed
pau101/Wings
565105544
Title: bauble charm (and any) aren't in correct position when flying Question: username_0: wing 1.12.2 1.1.6 botania r1.10-363 ![2020-02-14_12 45 48](https://user-images.githubusercontent.com/31458903/74502427-71843f80-4f28-11ea-9173-f0bddfea6b8e.png) ![2020-02-14_12 45 55](https://user-images.githubusercontent.com/31458903/74502428-734e0300-4f28-11ea-9954-426f23807c1b.png) ![2020-02-14_12 46 06](https://user-images.githubusercontent.com/31458903/74502431-75b05d00-4f28-11ea-929e-e481e41525e7.png) ![2020-02-14_12 46 12](https://user-images.githubusercontent.com/31458903/74502433-7648f380-4f28-11ea-9d9d-c95c4dee4bb8.png) ![2020-02-14_12 46 27](https://user-images.githubusercontent.com/31458903/74502436-7812b700-4f28-11ea-8aba-16b6ec011d6e.png) ![2020-02-14_12 46 39](https://user-images.githubusercontent.com/31458903/74502437-7943e400-4f28-11ea-9ffa-79c47cea1b57.png) ![2020-02-14_12 47 00](https://user-images.githubusercontent.com/31458903/74502441-79dc7a80-4f28-11ea-8a50-8c3aa2e8e2f7.png) ![2020-02-14_12 47 08](https://user-images.githubusercontent.com/31458903/74502445-7b0da780-4f28-11ea-8c63-26db61f2a451.png) ![2020-02-14_12 52 36](https://user-images.githubusercontent.com/31458903/74502658-274f8e00-4f29-11ea-8752-a9e51b4061db.png) ![2020-02-14_12 52 52](https://user-images.githubusercontent.com/31458903/74502661-29b1e800-4f29-11ea-908c-46a4b8179b69.png) Answers: username_1: Just personal opinion, but botania did not actually make the bauble render with 'relative position' on player. Instead they make them appear on a fixed place according to player orientation. Thus when this mod (Wings) change the player head position, the baubles would show in an incorrect and sometime funky location (which was actually the position of the player 'head' as the program reads it but not visually)... Looks might be hard to fix besides using some core modding...? Status: Issue closed username_2: Yep, this isn't an error with my mod. You see this from Botania if you fly with the vanilla elytra. username_0: What a pity. It has been months since I realized that vazkii had already dropped support for Botania 1.12.2 :(
National-COVID-Cohort-Collaborative/operations
606665082
Title: create onboarding documents Question: username_0: We will need a "package" of governance and phenotype/data egress info to send to sites. A draft should be made available by Tuesday. It should include the NCATS Data Transfer Agreement and IRB, as well as phenotype and data pull info, and finally contact info for assistance. Answers: username_0: @empfff also need Chris, Christine username_1: https://docs.google.com/document/d/1kQBkibubdmuy4fy48xaTTX5sGP4Q--zq7Na7UUGNY_I/edit username_0: looks like another one was already created here: https://docs.google.com/document/d/1dz_L_KNB1dhRNeCOxw0VEa1I3fuHOw7lgv_-fB5xSUY/edit i have been helping to edit this one
stackblitz/core
536676392
Title: Can't install @fireflysemantics/slice Question: username_0: Hi - I'm still getting installation errors on this package a lot - especially after it's published. It installs fine in normal angular projects. https://www.npmjs.com/package/@username_0/slice Answers: username_0: Tested it again and now - a day later it installs fine. NPM throttling issue? username_0: Oooops - never mind - at first it appears to go well, and then a few minutes later I get the message `@username_0/slice` and it offers to install it, but can't. username_0: I have had intermittent success. Sometimes it works sometimes not. And the version number is the same. Today it is not working. It says it's installed, and then it says it's not. This is a screenshot: ![Screenshot from 2019-12-22 12-17-04](https://user-images.githubusercontent.com/25155787/71325653-5dd2d480-24b5-11ea-9fd7-673599836862.png) Status: Issue closed username_1: Thanks for submitting the issue! We have experienced several problems with npm cache a while ago but this should be resolved already. Closing, but please reopen if you'll still experience it!
silverstripe/silverstripe-asset-admin
490056244
Title: Reorder fields in the place file form Question: username_0: The order of the presented fields seems slightly odd to me for placing a file, they aren't prioritized in order of importance or in a logical order for the person filling out the form. For instance, ALT text is normally only needed if there is no Image Caption, and Title Attribute generally isn't required if Caption and Alt exist and have done enough of an explanation. Image presentation is generally the first interest of the admin and fits better to be closer to the image preview. [As per style guide designs](https://projects.invisionapp.com/dsm/silver-stripe/silver-stripe/asset/components/5d70ab0cd81b4b445753d31b). Current order fields: 1. Image preview 2. Alternative text 3. Title attribute 4. Image caption 5. Image alignment 6. Image dimensions Suggested new order: 1. Image preview 2. Image alignment 3. Image dimensions 4. Image caption 5. Alternative text 6. Title attribute Answers: username_1: Note, #983 covers further work in this area of asset admin Status: Issue closed
pgbackrest/pgbackrest
302088855
Title: Question: Backup Archived wal logs Question: username_0: Please provide the following information when submitting an issue (feature requests or general comments can skip this): 1. pgBackRest version: 2.0 2. PostgreSQL version: 9.6.6 3. Operating system/version - if you have more than one server (for example, a database server, a repository host server, one or more standbys), please specify each: RHAT 7.2 4. Did you install pgBackRest from source or from a package? Package 5. Please attach the following as applicable: - `pgbackrest.conf` file(s) - `postgresql.conf` settings applicable to pgBackRest (`archive_command`, `archive_mode`, `listen_addresses`, `max_wal_senders`, `wal_level`, `port`) - errors in the postgresql log file before or during the time you experienced the issue - log file in `/var/log/pgbackrest` for the commands run (e.g. `/var/log/pgbackrest/mystanza_backup.log`) 7. Describe the issue: I have a space problem with the number of archived WAL logs being created. Is it possible to back up archived WAL logs only, using pgBackRest? If so, how would I go about doing this, as the backup type options are only full, diff and incr? Thank you in advance. Answers: username_1: It's possible to archive only WAL using the `archive_command`, but expiration is tied to backups so they would accumulate forever in the repository. We considered implementing archive-only expiration, but honestly this is the first time it has ever come up. Without backups, WAL have limited usefulness. Why not create backups as well? If you are worried about space, store them in S3. pgBackRest supports S3 storage (and encryption) natively. username_0: Thank you for your response. Are there any plans to implement archive-only expiration in a future release? In our case it's not practical to run a full backup every time our archive destination reaches a threshold; the cluster size is many terabytes. username_1: This feature is not a priority at this time. That may change in the future, but for now we consider it to be an edge case. There are many features with broader applicability that are higher priority. username_0: Thank you. Status: Issue closed
reallyenglish/ansible-role-editors
201196320
Title: Can't find vim--no_x11 on OpenBSD 6.0 Question: username_0: ```yaml editors_to_add: ['vim--no_x11'] ``` ``` failed: [vpn3.jp.reallyenglish.com] (item=vim--no_x11) => {"failed": true, "item": "vim--no_x11", "msg": "Error from http://ftp.openbsd.org/pub/OpenBSD/6.0/packages/amd64/ ftp: ftp.openbsd.org: no address associated with name http://ftp.openbsd.org/pub/OpenBSD/6.0/packages/amd64/ is empty Can't find vim--no_x11 "} ``` Answers: username_1: the error message does not look like an issue in the role. make sure you can resolve the DNS name on the machine. Status: Issue closed
StyraHem/ShellyForHASS
527673391
Title: Cannot see firmware information Question: username_0: Running Home Assistant 0.102.1 on hassbian. I cannot see firmware information for any shelly device in the lovelace frontend. I used the following code for monstercard: ``` card: show_header_toggle: false title: Shelly status type: entities filter: include: - entity_id: '*shelly*' type: 'custom:monster-card' ``` What am I doing wrong? Answers: username_1: Do you have additional_information enabled ? Do you have restricted login enabled on your device, you also have to specify user and password in shelly config. username_0: @username_1 thank you for your reply. In lovelace config or in home-assistant (configuration.yaml)? username_1: Shelly section in config.yaml, see readme for the ShellyForHass component. username_0: I do have the following config in my configuration.yaml: ``` shelly: discovery: true sensors: - all additional_information: true upgrade_switch: true ``` I can see all options for the shelly but firmware upgrade. username_1: Can you please send a picture username_0: I figured out why I couldn't see the button to update... All shellies were already up to date, I didn't imagine the button wouldn't show up if no update was available but it totally makes sense. Thank you very much for helping! username_1: Yes, that is how it work. Maybe I should write that in readme... username_2: @username_0 , issue is then fixed ? Can you please close it ? Thank you Simone username_0: Yes thanks! Issue resolved :) Status: Issue closed
mesonbuild/meson
252138378
Title: Enabling code coverage on 0.42.0 with lcov installed causes a ninja error Question: username_0: Seems like coverage analysis (`b_coverage`) is broken on 0.42.0. When `lcov`/`genhtml` are installed (whether `gcovr` is installed or not), enabling this option always results in an error: ``` Meson encountered an error: Multiple producers for Ninja target "coverage-html". Please rename your targets. ``` Confirmed on both MacOS and Fedora (using 0.42.0 RPM from Rawhide); 0.41.2 on Fedora works fine. Detailed steps to reproduce: 1. Create `meson.build`: ``` project('mwe', 'c') executable('mwe', 'mwe.c') ``` 2. Create `mwe.c`: ``` int main(int argc, char * argv[]) { return 0; } ``` 3. Confirm `lcov` and `genhtml` are installed. (Run each command with `--version`.) 4. Execute `meson _build -Db_coverage=true`, which fails with the error reported above. Answers: username_1: I think this is fixed by #2214. Status: Issue closed username_0: You're right! I built meson from git master f3812849 (which includes #2214) and it works fine. username_2: The fix for this will go into a stable release soon.
nazcasun/node-red-contrib-dashboard-sum-bars
372294755
Title: chart data is lost when redeploying flow Question: username_0: This is a feature request. I noticed that redeploying the flow containing this node causes all chart data to be lost. Note that this does not happen when using the standard dashboard chart node, so the standard dashboard chart node is somehow persisting/memorizing chart data across flow redeployments. It would be nice to have this feature for this node as well.
on-api/on-api
634189800
Title: Accesses / Feasability should include time in services/disconnection Question: username_0: Time should be included in services/disconnection so that the customer does not need to wait one day for the new TL to place an order. Example: "service": "BB-100-100", "connection": "2013-10-12", "available": "NO", "disconnection": "2020-08-12 14:23:12" Answers: username_1: This one is already specified correctly if I'm not mistaken. The parameter services/available dataformat points to https://github.com/on-api/on-api/blob/master/2.4.0/common/dataformats.md#date Status: Issue closed username_0: Closing this as this will be added to v.2.5.0.
signalfx/signalfx-agent
736905186
Title: monitors/http Regex doesn't work Question: username_0: Hello all, I am trying to implement the http monitor and check the response with a regex, but I always receive regex_match = 0 on SignalFx. Here are my settings: _Originalurl removed_ ``` - type: http intervalSeconds: 120 disableHostDimensions: true extraDimensions: application: example host: 'www.example.ch' path: '/' regex: '([oO]bject|[mM]oved)' port: 443 useHTTPS: true ``` Response from the site with cURL: ``` <head><title>Object moved</title></head> <body><h1>Object Moved</h1>This object may be found <a HREF="/claromoto/index.asp">here</a>.</body> ``` So how should I pass a regex string here? I have tested this regex string on https://regex101.com/ Kind Regards Answers: username_1: Is that response a 3xx redirect? If so, you should set `noRedirects: true` in the monitor config. Otherwise it seems ok, maybe try removing the parentheses around the regex (they are unnecessary). username_0: Yes, you are right ``` - type: http intervalSeconds: 120 disableHostDimensions: true extraDimensions: application: example host: 'www.example.ch' path: '/' regex: '[oO]bject|[mM]oved' port: 443 useHTTPS: true noRedirects: true ``` and now it is working. Thanks Status: Issue closed
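As a side note, a quick way to sanity-check a candidate pattern against the captured body before redeploying the agent is a local check like the sketch below (Python's `re`; the agent itself presumably evaluates the option with Go's regexp engine, so treat this only as an approximation for simple patterns like this one):

```python
import re

# Body copied from the cURL response above.
body = """<head><title>Object moved</title></head>
<body><h1>Object Moved</h1>This object may be found <a HREF="/claromoto/index.asp">here</a>.</body>"""

pattern = r"[oO]bject|[mM]oved"
print(bool(re.search(pattern, body)))  # True, so regex_match should become 1 once redirects are handled
```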
textileio/react-native-sdk
428506826
Title: Discontinue peerswap requests from migration Question: username_0: @username_0 commented on [Fri Mar 15 2019](https://github.com/textileio/textile-mobile/issues/963) Seems like this has been going long enough; we should probably stop these checks and the rest of that code. --- @asutula commented on [Fri Mar 15 2019](https://github.com/textileio/textile-mobile/issues/963#issuecomment-473395392) Ah cool. I already removed the rest of the migration code and must have just forgotten this. --- @username_0 commented on [Tue Apr 02 2019](https://github.com/textileio/textile-mobile/issues/963#issuecomment-479272399) I don't see any migration-related code left. _There are_ migration functions in the RN sdk, so moving this ticket over there Status: Issue closed
hashbang/aosp-build
548092695
Title: Add Patches and system-apps Question: username_0: Hello, I tried to build AOSP with microG, so I added the packages in config/manifest/base.xml as described here: https://github.com/microg/android_packages_apps_GmsCore/wiki/Building I used make DEVICE=sargo fetch to get the sources and jumped into the container with make shell. I applied this patch without problems: https://github.com/microg/android_packages_apps_GmsCore/blob/master/patches/android_frameworks_base-P.patch I started the build with make DEVICE=sargo and the build failed because it can't write the new apps to the desired folder. I don't think I used the build system the way it is meant to be used. How can I add system apps and apply patches? Answers: username_1: I have not done this myself and have no intention to do so. But as I understand it, you would need to follow this part of the documentation: https://github.com/microg/android_packages_apps_GmsCore/wiki/Building#integrate-gmscore-in-aosp-based-rom Did you do that? username_0: Yes. But I am not sure how to do it. Should I do it in the aosp-build folder or in the shell with make shell? username_1: `make shell` would be preferred. Please get familiar with Docker and with the layout of this project.
apache/couchdb
390748296
Title: Requests return 400 Bad Request when URL length exceeds 1460 characters Question: username_0: When using `chttpd.server_option = [{recbuf, undefined}]`, requests with URL length exceeding 1459 characters fail with `400 Bad Request` with no error in the logs. (The relevant URL length excludes protocol and userinfo). Example of failing request: ``` curl -v http://127.0.0.1:5984/_users/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ``` URL length is 1467 (1460 excluding protocol and userinfo) ## Expected Behavior Should return: ``` Host: 127.0.0.1:5984 User-Agent: curl/7.58.0 Accept: */* HTTP/1.1 404 Object Not Found Cache-Control: must-revalidate Content-Length: 41 Content-Type: application/json Date: Thu, 13 Dec 2018 13:18:50 GMT Server: CouchDB/2.3.0 (Erlang OTP/20) X-Couch-Request-ID: 0b4ad7148f X-CouchDB-Body-Time: 0 {"error":"not_found","reason":"missing"} ``` ## Current Behavior Actually returns: ``` HTTP/1.1 Host: 127.0.0.1:5984 User-Agent: curl/7.58.0 Accept: */* HTTP/1.1 400 Bad Request Connection: close Content-Length: 0 Date: Thu, 13 Dec 2018 13:15:09 GMT Server: MochiWeb/1.0 (Any of you quaids got a smint?) 
``` Example of successful request: ``` curl -v http://127.0.0.1:5984/_users/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ``` URL length is 1466 (1459 excluding protocol and userinfo). ## Possible Cause It appears that, by not setting a `recbuf` value, mochiweb_socket_server sockets end up with a `buffer` size of 1460 (the user-level buffer - http://erlang.org/doc/man/inet.html), which causes this limitation (`recbuf` and `sndbuf` are indeed defaulting to much larger values). This default value of 1460 seems to be coming from [Erl inet_drv.c](https://github.com/erlang/otp/blob/56f93ad10f89e6b8d3372e45127ec9fdc3fca35b/erts/emulator/drivers/common/inet_drv.c#L923) and initialized [here](https://github.com/erlang/otp/blob/56f93ad10f89e6b8d3372e45127ec9fdc3fca35b/erts/emulator/drivers/common/inet_drv.c#L9057). ## Steps to Reproduce Install CouchDB 2.3 (or build from `master`) and run with default configuration (specifically `chttpd.server_option = [{recbuf, undefined}]`). Execute a request with a URL length (excluding protocol and authority) exceeding 1460 characters (method or endpoint appear to be irrelevant). ## Context We are using PouchDB to interact with our CouchDB database and ran into this error with view queries. PouchDB does issue a POST request instead of a GET, but only when the query string exceeds 2000 characters (see [here](https://github.com/pouchdb/pouchdb/blob/f87c57a3b3b791dbb62bd881378755ca0a195d13/packages/node_modules/pouchdb-abstract-mapreduce/src/index.js#L262)). ## Your Environment * Version used: 2.3.0, master * Using: Chrome, cURL, wget * Operating system: Ubuntu 18, Ubuntu 16 (desktop) Even if it is decided that this is desired behavior, I believe this change should at least be documented. Answers: username_1: Hi @username_0 , thanks for the report. This is definitely not intended - we'll take a closer look. See https://github.com/apache/couchdb/issues/1409 for the history here; we can't directly just revert the change because the other use case will be impacted, so this may take a bit of research to resolve sufficiently. 
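(A side note for anyone verifying the analysis above: the Erlang userland `buffer` default can be inspected from any Erlang shell. The snippet below is a generic check, not CouchDB-specific, and the values reported depend on the OS and Erlang/OTP version.)

```erlang
%% Open a listening socket without touching recbuf, then read back the
%% inet options to see what the driver chose for the userland buffer.
{ok, LSock} = gen_tcp:listen(0, []),
inet:getopts(LSock, [buffer, recbuf, sndbuf]).
```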
username_1: Interesting: running `dev/run` doesn't result in the error, but running a release (`rel/couchdb/bin/couchdb`) does.
username_1: http://erlang.org/pipermail/erlang-questions/2011-June/059571.html
https://github.com/ninenines/cowboy/issues/3
We're weighing our options now. We're currently hoping that improving mochiweb to pass `[{buffer, 8192}]` usefully through one setting or another in `*.ini` (or similar) will be enough. The cowboy solution is going to be more reliable in the long run, but it is a fairly large retrofit to mochiweb.
username_2: Hi @username_0,
Thank you for the bug report and excellent analysis. You're absolutely correct: `chttpd.server_option = [{recbuf, undefined}]` ends up not setting the recbuf value, which means the Erlang userland buffer gets a default value of 1460 (when recbuf is set, so is the userland buffer). This buffer size limit, combined with a long-standing bug in Erlang's `{packet, http}` parser, ends up crashing the socket receive code with an `emsgsize` error.
It's not currently possible to set the buffer size in mochiweb, so I will make a pull request to fix that.
username_2: pull request: https://github.com/mochi/mochiweb/pull/208
username_0: Hi @username_1, @username_2
Thank you very much for the quick responses and for the additional context! The Erlang http packet parser bug was the missing piece in my understanding of why this is happening.
We will be updating our configs to set a `recbuf` value for 2.3.0 installs.
username_1: @username_0 Be advised that this may adversely affect your attachment performance (if you use attachments); see #1409. We are considering a 2.3.1 fix for this bug since it's rather a surprising regression.
username_1: @username_2 I'm not sure that the fix in https://github.com/mochi/mochiweb/pull/208 is going to satisfy the situation in #1409. Basically, the patch as written, I *think*, only sets the buffer to `8192` if `recbuf` is undefined. This means if you need >8k of headers you have *no choice* but to peg `recbuf` higher, which causes the issue seen in #1409. I don't think it's fair to trade off a fix for this against poor attachment performance again.
Ideally we shouldn't be using a fixed 8k value in mochiweb but instead calling `getopts` to figure out the OS's window size (at least initially) and using that to set `buffer`'s size. I know you had runtime reservations about the performance of inserting a `getopts` call with every socket open, but we should characterize this.
An alternative without going back to mochiweb would be to pass in `{buffer, BIGNUM}`. I don't know if your patch to mochiweb allows passing buffer in now as part of `server_options`, does it? If so, this could be a documented workaround; based on testing on Linux, the default was around 50k on my system, and 50k of headers should be sufficient.
But the only real fix for this (arbitrary header length + undefined recbuf) is to stop using `{packet, http}`, which is a MASSIVE change to mochiweb, and no one is volunteering to write it. So a tradeoff is probably our best bet here.
username_1: @username_2 bump, we need to solve this, see #1843. Any comments on my proposal?
username_2: @username_1 Oh, sorry, I dropped this one. It's already solved; we just need to merge the latest mochiweb master into our copy and tag it.
username_1: @username_2 did you read my comment above? https://github.com/apache/couchdb/issues/1810#issuecomment-448060589
Maybe we just call it good enough with 8192 for now, but I'm not sure it's sufficient. Do we want to cut a 2.3.1 with this?
username_1: @username_2 can you hop on IRC briefly to finish hashing this out? I want to be sure I understand where you're talking about the OS buffer (recbuf) and the mochiweb buffer (buffer) above, so that we don't force people to have to manually specify recbuf's size just to bump buffer's size.
username_1: OK, for those who are curious: @username_2's patch to mochiweb does the right thing; it's just that our discussion of `recbuf` vs. `buffer` as mochiweb settings above isn't 100% clear.
With the patch to mochiweb, and a stock CouchDB (no changes in `*.ini`), we set mochiweb's `recbuf` to undefined. Mochiweb then lets the kernel manage its buffer size itself (`recbuf` is undefined), and sets the Erlang userland buffer to `8192`. Because the Erlang `http` packet parser requires the entire header line (URL path, cookie, anything is fair game) to be in the buffer returned from the kernel, if the request exceeds `8192` bytes, erlang/mochiweb explode, and CouchDB will return a 400. (The parser can't handle header lines split across buffers.)
The workaround for this is to set your `server_options = [{buffer, 16384}]`, or another suitably large size, leaving `recbuf` undefined. This way the kernel still manages the OS buffer size automatically, and only the Erlang userland buffer is made bigger to handle that.
This doesn't catch 100% of the use cases: `buffer` can only be made as large as the kernel `recbuf`, and if the kernel buffer isn't big enough for all the headers, you still fail. In that case - and in my testing, you're talking about ~50k on Linux - you have to bump `recbuf` (which, in turn, auto-bumps `buffer` to the same size). This should be very, very rare, though, on modern OSes.
Unfortunately there's no easy way to make mochiweb smarter here. If `buffer < recbuf`, and the line length is > `buffer`, mochiweb gets an `emsgsize` error back and returns a 400 before Couch even gets its hands on anything. If `buffer = recbuf` and the Request-URI is still too long, Erlang fails with an `http_error` message.
I think our best bet is to document this. What's left to do:
1. Write a test that bumping `[{buffer, 16384}]` for a 16k URL path is successful.
2. Document the above in a useful way in the CouchDB Documentation troubleshooting section.
3. Release 2.3.1, as this is very surprising for users.
username_3: Is there a due date for the release of 2.3.1? This bug is biting us currently.
username_1: We are currently in the release process - we need to finish acceptance testing.
username_3: @username_1 Awesome! For now we implemented a workaround. Thanks for the prompt reply and keep up the good work!
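(To make the workaround described above concrete: in `local.ini` it would look roughly like the following. The section and option names follow the comments in this thread, `16384` is only an example size, and keeping `{recbuf, undefined}` in the list preserves the default of letting the kernel size the OS buffer.)

```ini
; local.ini -- illustrative values only
[chttpd]
server_options = [{recbuf, undefined}, {buffer, 16384}]
```

After a restart, repeating the long-URL `curl` request from the report should return the normal 404 from CouchDB instead of the bare mochiweb 400.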
song940/node-escpos
612686941
Title: Running the example does not produce the expected output
Question: username_0: I tried the basic example, but the printed output is not as expected. What is happening?
![IMG_0117](https://user-images.githubusercontent.com/10142496/81082968-2a518d80-8f1e-11ea-9a88-1adc58ba19ce.jpg)
**escpos:** 3.0.0-alpha.3
**escpos-network:** 3.0.0-alpha.2
Answers: username_1: Is the paper size 80mm?
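(For context, the basic example from the README, adapted for the network adapter used here, looks roughly like the sketch below. The printer address, port, and encoding are placeholders; garbled output like in the photo is commonly caused by the `encoding` option or a paper-width mismatch (58mm vs. 80mm) rather than by the library itself.)

```js
const escpos = require('escpos');
escpos.Network = require('escpos-network'); // adapters are separate packages in 3.0.0-alpha

// Placeholder address and port -- use your printer's values.
const device = new escpos.Network('192.168.1.50', 9100);
// The encoding here is an assumption; pick the code page your printer uses.
const printer = new escpos.Printer(device, { encoding: 'GB18030' });

device.open(function (err) {
  if (err) return console.error(err);
  printer
    .align('ct')
    .text('hello from node-escpos')
    .cut()
    .close();
});
```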