repo_name: string (length 4–136)
issue_id: string (length 5–10)
text: string (length 37–4.84M)
MiguelSobera/Tarjeta
560446149
Title: Implement the Tarjeta Monedero interface Question: username_0: The Tarjeta Monedero (wallet card) interface must be implemented, and it will be named ITarjetaMonedero. This interface will have the following methods: - Comprar (buy): this method returns nothing and takes a float parameter and a String parameter - Getter and setter methods for the required attributes
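The assignment presumably targets Java (given the float/String types), but the contract is small enough to sketch. A minimal illustration in Python, where every name other than `ITarjetaMonedero` and `comprar` (`importe`, `concepto`, `saldo`) is an assumed placeholder:

```python
from abc import ABC, abstractmethod

class ITarjetaMonedero(ABC):
    """Wallet-card interface described in the issue."""

    @abstractmethod
    def comprar(self, importe: float, concepto: str) -> None:
        """Buy: returns nothing; takes a float and a string."""

    # Getters/setters for whatever attributes the implementation needs,
    # e.g. a card balance ('saldo' is an assumed attribute name).
    @abstractmethod
    def get_saldo(self) -> float: ...

    @abstractmethod
    def set_saldo(self, saldo: float) -> None: ...
```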
prescottprue/redux-firestore
419899218
Title: state not updated correctly after removing more than 1 item from sub collections (with batch function) Question: username_0: Using the firestore batch function for deleting multiple items from a **sub collection**, "sometimes" the state just doesn't update correctly, and instead of removing all deleted items it removes just one. The firestore database updates correctly; only after refreshing do I get the correct state. Answers: username_1: @username_0 Please report what versions of dependencies you are using and how you are attaching listeners and/or rendering the data. All of this is necessary to try to reproduce. username_0: **from my `package.json`**

```
"react-redux-firebase": "^2.2.5",
"redux-firestore": "^0.7.2",
```

**from my index.js**

```
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { applyMiddleware, compose, createStore } from 'redux';
import thunk from 'redux-thunk';
import { reactReduxFirebase, getFirebase } from 'react-redux-firebase';
import { reduxFirestore, getFirestore } from 'redux-firestore';
import firebase from '../firebase.config';
import { Router, hashHistory } from 'react-router';
import rootReducer from './cms/root-reducers';
import Routes from '/src/routes';

// composeEnhancers is assumed to be defined elsewhere in the original
// project (e.g. the Redux DevTools compose, falling back to `compose`)
const store = createStore(
  rootReducer,
  composeEnhancers(
    applyMiddleware(
      thunk.withExtraArgument({ getFirebase, getFirestore }),
    ),
    reduxFirestore(firebase),
    reactReduxFirebase(firebase, {
      userProfile: 'users',
      useFirestoreForProfile: true,
      preserveOnDelete: true,
    }),
  )
);

ReactDOM.render(
  <Provider store={store}>
    <Router history={hashHistory} routes={Routes.routesMap} />
  </Provider>,
  $root
);
```

from my `root-reducers.js`

```
import { combineReducers } from 'redux-immutable';
import { firestoreReducer as fireStore } from 'redux-firestore';
import { firebaseReducer as firebase } from 'react-redux-firebase';

const rootReducer = combineReducers({
  fireStore,
  firebase,
});

export default rootReducer;
```

I believe this is all the relevant info; please let me know if you think anything else is necessary. Did I articulate the problem well enough? Is it clear? It is reproducible simply by using the batch function to remove items from a sub collection. It happens almost every time (sometimes the first try works).
spring-projects/spring-framework
492179728
Title: Regression: Improper UTF-8 handling in MockMvc for JSON response Question: username_0: When `MediaType.APPLICATION_JSON_UTF8` was deprecated, the content type of JSON that gets sent from a `@RestController` changed to `application/json` (without a charset). This breaks MockMvc tests that use `.andExpect(content().json())`. Here is a sample test:

```
@Test
public void returnsTheExpectedResponse() throws Exception {
    mockMvc.perform(get("/"))
        .andExpect(status().isOk())
        .andExpect(content().json("{\"name\":\"Jürgen\"}"));
}
```

The test runs fine with Spring 5.1.9 but fails with Spring 5.2.0.RC2:

```
java.lang.AssertionError: name
Expected: Jürgen
     got: JÃ¼rgen
```

You can find a sample Spring Boot project to reproduce the problem at https://github.com/username_0/mockmvc-json-utf8 There was a similar issue for `jsonPath()` (#23219). Status: Issue closed Answers: username_2: Both overloads of `ContentResultMatchers.string()` do this, too. Three other functions in that class call `getContentAsString()` with no arguments, but I'm not sure if that results in the same issue. username_3: I use getContentAsString, and MockMvc seems to return my content as ISO 8859-1. Before (with Spring Boot 2.1.9) it was proper UTF-8. When I mark my endpoint with @Produces("application/json; charset=utf-8") it works as intended, but I don't want to go through all my endpoints to do this. username_4: In your tests, use `ContentResultMatchers#json(String)`.
DIYgod/RSSHub
457220856
Title: Request to add an RSS feed for the "Voice of the Economy" (经济之声) radio channel Question: username_0: <!-- Please make sure the [docs](https://docs.rsshub.app) and existing [issues](https://github.com/username_3/RSSHub/issues) do not already cover this, that the source site does not provide RSS, and provide the information following the template; otherwise this issue will be closed immediately. RSS requests are currently backlogged; if you are able to, please follow the [guide](https://docs.rsshub.app/joinus) to write the route yourself and submit a PR --> ### Source site URL http://www.radio.cn/pc-portal/sanji/zhibo_2.html?channelname=2&name=520767&title=radio# ### What content should be generated? - [ ] Item 1: the "Those Years" (那些年) podcast on Voice of the Economy - [ ] Item 2 ### Additional description Audio content sorted by time, usable with a generic podcast subscription. Note: subscribing to the programme on Lizhi has a one-to-two-day delay; a faster crawl is wanted. Answers: username_1: http://tacc.radio.cn/pcpages/searchs/livehistory?channelname=2&name=520767&callback=&start=1&rows=20&_=1560822921054 username_1: Awkward — I was halfway through writing it when the API started returning empty results. username_2: @username_1 The URL carries a timestamp; try updating the time parameter. username_1: The original page is gone as well. username_0: It may be a site problem; all the old content has disappeared 😂 username_0: @username_1 Any interest in having another go? The site is back. username_1: @username_0 I'll take a look tonight, and save a copy of the response in case it disappears again.

```
{
    "result_code": 0,
    "result_message": "success",
    "passprogram": [{
        "stream_url2": "\/live\/jjzs\/201906\/nxn_20190620221130jjzs_l.m4a",
        "stream_url3": "\/live\/jjzs\/201906\/nxn_20190620221130jjzs_l.m4a",
        "stream_url1": "\/live\/jjzs\/201906\/nxn_20190620221129jjzs_h.m4a",
        "display_id": "1592586",
        "live_channel_id": "2",
        "stream_domain1": "cnvod.cnr.cn\/audio2018",
        "stream_domain2": "cnvod.cnr.cn\/audio2018",
        "update_time": "2019-06-20 22:11:29.0",
        "stream_domain3": "cnvod.cnr.cn\/audio2018",
        "section_id": "866f9cee-656a-4221-bf1e-1bb7fc78fba0",
        "ondemand_channel_display_id": "520767",
        "id": "520767-bdbff1dc-948a-4111-b539-1ecf3dfad5d7",
        "search_name": "\u90a3\u4e9b\u5e74",
        "broadcast_date": "2019-06-20",
        "search_time": "2019-06",
        "channel_name": "\u7ecf\u6d4e\u4e4b\u58f0",
        "end_time": "22:00:00",
        "resource_length": "3600",
        "name": "\u90a3\u4e9b\u5e74",
        "start_time": "21:00:00",
        "resource_size": "83.3",
        "order_num": "1973",
        "_version_": 1636899487616073728
    }, {
        "stream_url2": "\/live\/jjzs\/201906\/nxn_20190619220835jjzs_l.m4a",
        "stream_url3": "\/live\/jjzs\/201906\/nxn_20190619220835jjzs_l.m4a",
        "stream_url1": "\/live\/jjzs\/201906\/nxn_20190619220834jjzs_h.m4a",
        "display_id": "1591612",
        "live_channel_id": "2",
        "stream_domain1": "cnvod.cnr.cn\/audio2018",
        "stream_domain2": "cnvod.cnr.cn\/audio2018",
        "update_time": "2019-06-19 22:08:33.0",
        "stream_domain3": "cnvod.cnr.cn\/audio2018",
        "section_id": "866f9cee-656a-4221-bf1e-1bb7fc78fba0",
        "ondemand_channel_display_id": "520767",
        "id": "520767-f12ef2ee-e676-4ca1-b219-828b1f3488b4",
        "search_name": "\u90a3\u4e9b\u5e74",
        "broadcast_date": "2019-06-19",
        "search_time": "2019-06",
        "channel_name": "\u7ecf\u6d4e\u4e4b\u58f0",
        "end_time": "22:00:00",
        "resource_length": "3660",
        "name": "\u90a3\u4e9b\u5e74",
        "start_time": "21:00:00",
        "resource_size": "83.3",
        "order_num": "1972",
        "_version_": 1636808875369824256
    }, {
        "stream_url2": "\/live\/jjzs\/201906\/nxn_20190618221253jjzs_l.m4a",
        "stream_url3": "\/live\/jjzs\/201906\/nxn_20190618221253jjzs_l.m4a",
        "stream_url1": "\/live\/jjzs\/201906\/nxn_20190618221253jjzs_h.m4a",
        "display_id": "1590703",
[Truncated]
        "stream_domain2": "cnvod.cnr.cn\/audio2018",
        "update_time": "2019-06-01 22:09:39.0",
        "stream_domain3": "cnvod.cnr.cn\/audio2018",
        "section_id": "866f9cee-656a-4221-bf1e-1bb7fc78fba0",
        "ondemand_channel_display_id": "520767",
        "id": "520767-1acbba5c-b39d-4e9c-823f-339c6f58a1a2",
        "search_name": "\u90a3\u4e9b\u5e74",
        "broadcast_date": "2019-06-01",
        "search_time": "2019-06",
        "channel_name": "\u7ecf\u6d4e\u4e4b\u58f0",
        "end_time": "22:00:00",
        "resource_length": "3660",
        "name": "\u90a3\u4e9b\u5e74",
        "start_time": "21:00:00",
        "resource_size": "83.3",
        "order_num": "1954",
        "_version_": 1636742802687655939
    }],
    "total": 1093
}
```

Status: Issue closed
bazelbuild/bazel
217558812
Title: End-to-end tests for bazel rules are hard to write Question: username_0: In Bazel we use //src/test/shell to do end-to-end tests; those tests are written in shell, so they are hard to maintain, and we do not expose that framework to rule writers. We could: - Clean up our e2e support - Expose it in @bazel_tools - Expose the bazel binary as a dependency so we can write those tests. /cc @username_3 @or-shachar Answers: username_0: Hopefully we can work on our integration testing framework this quarter. /cc @aragos fyi username_1: 👏🏽 username_2: This is blocking my capacity to jettison `docker_build` into `rules_docker`... What is the workaround? Rewriting the tests to avoid this dependency? username_1: Matt, how do you feel about rewriting the tests to avoid this dependency? username_2: Doesn't look like I have a choice if I want to do this prior to 0.6 getting released... :) username_1: I understand. My question was about how you're going to do this. We tried a few different methods and couldn't find a way we think is actually good. rules_scala has a bash script which isn't part of Bazel and just calls out to Bazel. This isn't a good solution, but it's the least bad one we could see, so we're using it. username_0: The Docker rules do not have failure-mode tests, so their tests do not need to embed bazel integration-testing support. username_2: I ended up redefining the functions I needed, some essentially copied from the bashunit library: https://github.com/bazelbuild/rules_docker/blob/master/docker/testenv.sh#L25 username_3: Note that this is not only a problem for language rules, but for any extension that users write, even internal ones. username_1: @username_0 this seems like the generic relevant issue, right? What we discussed is the desire to move the skylark rules introduced [here](https://bazel-review.googlesource.com/c/15432/1) to a separate repo which people can depend on directly. Additionally, we talked about having the rules extract Bazel itself and so save the ~17 seconds I saw when I tried something similar. username_0: https://github.com/bazelbuild/bazel-integration-testing not yet easy to use, but launch approved :) username_1: Well done! 🙇 👏 username_4: cc/ @username_4 username_4: Moving to EngProd, as I think the logical conclusion of this thread is that bazel would take a dependency on bazel-integration-testing and use that. EngProd is the best team to make the call on the value of that. username_1: I'll add that bazel-integration-testing is somewhat dormant, mainly due to no capacity on my side, so if EngProd decides this is logically something they want to do then Wix will find capacity for this repo, and/or EngProd can co-maintain (if you want to be sole maintainers we can discuss that too). username_4: Yes. Greenfield development of a tool is fun. Keeping the lights on is less so. My recent interest comes from looking at a lot of Google's internal integration tests. They are a mess of different styles and techniques, and most are horribly Linux-specific in their use of bash-isms and tool assumptions. We need some portable ways to write out a small workspace, run bazel, and test that particular targets build, fail in specific ways, or produce specific output. The worst are the failure tests, which are either brittle w.r.t. the exact words expected in the output of a failure, or do suspect things with sed to normalize output. I had started a very rough draft of some ideas. I just externalized it here.
[Better Bazel Integration Tests](https://docs.google.com/document/d/1WKQ-CQ64Y-xFUIN-tBjmjQl63OhL4vjOlkX5uEaGel8/edit#) username_1: That's a bit unfair given I've been maintaining rules_scala for about 3-4 years now, and it's far from being greenfield. I think the main issue is that for us this is 2% of our overall concern, while maintainers should probably be devs who can't live without the project. One use case that we care about and wasn't mentioned (I think) was rulesets: I want to test negative or reproducibility functionality of rules_scala inside the bazel build. I've read through your doc and it's interesting, at the very least. I'll give it some more thought. Thanks for sharing! username_1: 👍🏽 It sounds like we have a lot in agreement. I'd love for the Bazel team to help/co-maintain/maintain such an external framework. I think it's valuable to take a look at Google's internal needs, Bazel's needs, and the community rules' needs, as well as other companies with internal extensions. If you (Bazel people) find capacity and interest, I think it could be super valuable to host an online round table to flesh out needs and requirements. Probably start with a doc beforehand to make the discussion more effective. Good luck!
pfafman/meteor-photo-up
65151824
Title: Question: does this actually upload an image to the server Question: username_0: Hi, I am trying to understand if this will actually save an image on the server, and if so, where. I am currently using meteor-file-collection (GridFS) and am looking for a means to edit images prior to saving. Is there a way to send data into something like file-collection, or is this only for physical uploads? Thanks Answers: username_1: This package does not save the image to the server. It only gets it into Meteor on the client, via the callback. You then save it where you want. For example, you could drop this into a form for a user preference where they can add an image, and then on form save push that image into their profile if you wanted. Status: Issue closed
adobe/aio-cli
1137369685
Title: Migrate to oclif v2. Question: username_0: oclif v2 has been announced. The following libraries have been consolidated into `@oclif/core` and will be deprecated at some point in the future:

- @oclif/command
- @oclif/config
- @oclif/error
- @oclif/help
- @oclif/parser

Migration guide: https://github.com/oclif/core/blob/main/MIGRATION.md What's new: https://oclif.io/blog/2022/01/12/announcing-oclif-v2#whats-new
eggjs/egg
812496697
Title: An egg TypeScript project packaged with pkg cannot find loaded classes: Class extends value undefined is not a constructor or null Question: username_0: ## What happens? An egg TypeScript project packaged with pkg fails while loading controllers at startup with the error: Class extends value undefined is not a constructor or null

```
TypeError: [egg-core] load file: D:\snapshot\ehr-node-ts\app\controller\dropdownController.js, error: Class extends value undefined is not a constructor or null
    at Object.<anonymous> (D:\snapshot\ehr-node-ts\app\controller\dropdownController.js)
    at Module._compile (pkg/prelude/bootstrap.js:1320:22)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1156:10)
    at Module.load (internal/modules/cjs/loader.js:984:32)
    at Function.Module._load (internal/modules/cjs/loader.js:877:14)
    at Module.require (internal/modules/cjs/loader.js:1024:19)
    at Module.require (pkg/prelude/bootstrap.js:1225:31)
    at require (internal/modules/cjs/helpers.js:72:18)
    at Object.loadFile (D:\snapshot\ehr-node-ts\node_modules\dinegg\node_modules\egg\node_modules\egg-core\lib\utils\index.js:27:19)
    at getExports (D:\snapshot\ehr-node-ts\node_modules\dinegg\node_modules\egg\node_modules\egg-core\lib\loader\file_loader.js:199:23)
```

## Reproduction steps, error log, and configuration Using the latest egg version, with the common plugins (egg-mysql, egg-sequelize, egg-jwt, egg-cors, etc.) wrapped into our own upper-layer framework.<issue_closed> Status: Issue closed
hhru/android-multimodule-plugin
914371109
Title: Plugin download links are broken Question: username_0: When I click the links in the [README](https://github.com/hhru/android-multimodule-plugin), like the [hh-geminio](https://github.com/hhru/android-multimodule-plugin/blob/master/distr/hh-geminio.zip) link, I get the 404 GitHub page. Please provide links to the current plugin binaries. Answers: username_1: @username_0, thanks for opening this one. I've decided to remove zip artifacts from this repository because some people faced strange issues when using the plugins on OSes other than macOS. A more reliable solution for you is to build the plugins' zip archives yourself on your machine with the `./gradlew :plugins:hh-geminio:buildPlugin` Gradle task. The links will be removed from the README. Thanks! Status: Issue closed
jupyterhub/jupyterhub
913552794
Title: Best practice for resource management Question: username_0: Hello! We are using JupyterHub and nbgrader on a server with 72 CPU cores, 768 GB RAM and four RTX 2080 Ti GPUs. Does a best practice for resource management already exist? We are looking for a solution to limit the resources per user, e.g. 4 CPU cores, 8 GB RAM and a share of GPU RAM. Kind regards, username_0 Answers: username_1: You can refer to this guide: https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/user-resources.html#set-user-memory-and-cpu-guarantees-limits. You can also customize a Spawner.
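Outside of the Kubernetes setup linked above, the same per-user caps can be expressed through JupyterHub's Spawner traits. A minimal sketch of a `jupyterhub_config.py`, assuming a container-backed spawner such as DockerSpawner or KubeSpawner (the default local-process spawner treats these values as advisory, and GPU-memory partitioning is not a built-in Spawner feature):

```python
# jupyterhub_config.py -- minimal sketch, not a drop-in configuration
c = get_config()  # provided by JupyterHub when the config file is loaded

# Cap each single-user server at 4 CPU cores and 8 GiB RAM.
# Enforcement needs a cgroup/container-backed spawner (DockerSpawner,
# KubeSpawner, ...); LocalProcessSpawner only records these values.
c.Spawner.cpu_limit = 4
c.Spawner.mem_limit = "8G"

# Optional guarantees (scheduling hints, honored e.g. by KubeSpawner):
c.Spawner.cpu_guarantee = 1
c.Spawner.mem_guarantee = "1G"
```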
aspnetboilerplate/aspnetboilerplate
383260530
Title: Multi Lingual Entities Question: username_0: I am using the ABP MVC5 with AngularJS template. If I have a Product entity:

```
public class Product : Entity, IMultiLingualEntity<ProductTranslation>
{
    public virtual decimal Price { get; set; }
    public virtual int Stock { get; set; }
    public virtual ICollection<ProductTranslation> Translations { get; set; }
}
```

and a ProductTranslation:

```
public class ProductTranslation : Entity, IEntityTranslation<Product>
{
    public virtual string Name { get; set; }
    public virtual Product Core { get; set; }
    public virtual int CoreId { get; set; }
    public virtual string Language { get; set; }
}
```

For the DTOs:

```
public class ProductCreateDto
{
    public decimal Price { get; set; }
    public int Stock { get; set; }
    public virtual string Name { get; set; }
}
```

```
public class ProductUpdateDto : ProductCreateDto, IEntityDto
{
    public int Id { get; set; }
}
```

My questions are: 1. How can I create a mapping between ProductCreateDto and Product? I.e. I want to enter a name and price and have it automatically add a new product whose translation uses ProductCreateDto.Name with the currently selected language. 2. For updates, how can I map ProductUpdateDto so that it only adds a new translation, without removing the old translations, and updates the current translation without removing the others? I.e. if I have a product whose English name is "Product Name", after the update I want only the English translation to become "updated product name" without removing the other translations; if it has Arabic, Turkish, French and English translations, I want to update only the English one. Thanks for your consideration. Answers: username_1: @username_0 1. You can directly map ProductCreateDto to Product and then insert a new translation record using ProductCreateDto's Name field and the current language. 2. Using a mapping is not a good idea here, I guess. You can check the Product.Translations collection for the updated language and insert/update the appropriate record in your app service method. Status: Issue closed username_1: Please reopen if that doesn't work for you.
e-XpertSolutions/f5-rest-client
285831056
Title: Example of initializing nested struct Question: username_0: Hello! Do you have an example of initializing a nested struct such as the [Pools field on the Wideip struct](https://github.com/e-XpertSolutions/f5-rest-client/blob/master/f5/gtm/wideip_a.go#L33)? I've tried various different ways and I'm unable to make it work. Is there a reason why you wouldn't make this field use one of your custom types instead of a slice of structs? Thank you and keep up the great work! Answers: username_0: Hi! Any chance you can help me out with this one? :-) Thanks! username_1: Hi @username_0, Sorry for my late reply. The reason why it is a nested struct is that this part of the code has been generated automatically, and the generator only creates nested structures. But it would probably be better to use a custom type instead :-) For now, here is the way to initialize a nested list of structs:

```
foo := gtm.Wideip{
	Pools: []struct {
		Name          string `json:"name,omitempty"`
		NameReference struct {
			Link string `json:"link,omitempty"`
		} `json:"nameReference,omitempty"`
		Order     int    `json:"order,omitempty"`
		Partition string `json:"partition,omitempty"`
		Ratio     int    `json:"ratio,omitempty"`
	}{
		{
			Name:      "some-name",
			Order:     1,
			Partition: "Common",
		},
	},
}
```

Basically it is an anonymous struct, and as such it must be re-declared in exactly the same way as the one from gtm.Wideip, including the JSON annotations. Why do you need to initialize this list manually, btw? It is returned by the iControl REST API on GET requests, but AFAIK you cannot use it for creation and update. username_0: Thanks! I wanted to use it for creation and update. If you can't use it for these purposes, how should you do it? username_1: I would need to do some tests to answer your question. I'll test this in our lab and get back to you ;-) username_1: So, I made some tests and I was wrong. You can use the **Pools** field to provide _existing pool_ objects in order to link the Wideip object to the pools during creation and update, as you wanted to use it. Note that the Pools must already exist or be _created beforehand_; they won't be created on the fly. Status: Issue closed
KonstantinEger/neural-net-rs
736425409
Title: Documentation for NeuralNet.feed_forward Question: username_0: [Branch](https://github.com/username_0/neural-net-rs/tree/develop) Documentation comments for the feed_forward method are missing. Answers: username_0: Closed with pull request #5 Status: Issue closed
angular/angular
401349330
Title: docs: update examples to use static Injector Question: username_0: # 📚 Docs or angular.io bug report Associated PR with DI api updates Status: Issue closed Answers: username_0: Fixed via #29729
santoshphilip/eppy
477748451
Title: error in parallel running Question: username_0: Hi, I am new to Python and eppy. I am trying to run files in parallel, following an earlier example, but I get an error. What is the correct structure? Can you please help?

```
from eppy import modeleditor
from eppy.modeleditor import IDF
from eppy.runner import run_functions

modeleditor.IDF.setiddname('C:\EnergyPlusV8-9-0\Energy+.idd')

idfsourceFolder1 = "C:\Users\SALI5786\OneDrive Corp\OneDrive - Atkins Ltd\Desktop\energyplus\collection\output_idf1\Iteration-001"
idfname1 = "\2019-07-31_Change1.idf"
idfsourceFolder2 = "C:\Users\SALI5786\OneDrive Corp\OneDrive - Atkins Ltd\Desktop\energyplus\collection\output_idf1\Iteration-002"
idfname2 = "\2019-07-31_Change2.idf"
epwfile = full_path

jobs = []
for i in range(1):
    jobs.append(
        [
            modeleditor.IDF(fname1.format(i), epwfile.format(i)),
            modeleditor.IDF(fname2.format(i), epwfile.format(i))
            {'output_directory': 'C:/Users/SALI5786/OneDrive Corp/OneDrive - Atkins Ltd/Desktop/energyplus'.format(i),
             'ep_version': '8-9-0'}
        ]
    )
run_functions.runIDFs(jobs, 1)
```

The error is a syntax error at ep_version:

```
  File "<ipython-input-102-567c3c7f12a6>", line 36
    {'output_directory': 'C:/Users/SALI5786/OneDrive Corp/OneDrive - Atkins Ltd/Desktop/energyplus'.format(i), 'ep_version': '8-9-0'}
SyntaxError: invalid syntax
```

Thank you

Answers: username_1: Hi @username_0 Thanks for opening this here. Looking at your code, there are a few issues to take care of. I've made changes and left comments where I've changed things. Let me know if there are any things which are unclear, or if you hit any more errors.

```
import os
from eppy.modeleditor import IDF
from eppy.runner import run_functions

# here and below I removed `modeleditor` since it's not imported,
# and you already imported `IDF`
IDF.setiddname('C:/EnergyPlusV8-9-0/Energy+.idd')  # changed all \ to / to avoid character escaping problems

# define the parts of the file path
energyplus_folder = "C:/Users/SALI5786/OneDrive Corp/OneDrive - Atkins Ltd/Desktop/energyplus"
idf_source_folder = "collection/output_idf1"
name_template = "Iteration-00{i}/2019-07-31_Change{i}.idf"  # the `{i}`s are substituted by `idfname.format(i=i)` later

# then join the parts together
idfname = os.path.join(
    energyplus_folder,
    idf_source_folder,
    name_template
)

# need to provide an .epw file path, for example
epwfile = "C:/EnergyPlusV8-9-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw"

jobs = []
for i in range(1):
    # each time around the loop, we add a new job to the list of jobs
    jobs.append(
        [
            IDF(idfname.format(i=i), epwfile),  # substitution happens here
            # I removed the second IDF from here - we can only add one at a time
            {'output_directory': energyplus_folder,
             'ep_version': '8-9-0'}
        ]
    )

# then finally we run the list of jobs, and the results come out
# in the energyplus_folder directory
run_functions.runIDFs(jobs, 1)
```

username_0: Thank you! However, I'm still getting an error:

```
import os  # needed for os.path.join (missing in the original snippet)
from eppy.modeleditor import IDF
from eppy.runner import run_functions

IDF.setiddname('C:/EnergyPlusV8-9-0/Energy+.idd')

# ep folder
epf_folder = "C:/Users/SALI5786/OneDrive Corp/OneDrive - Atkins Ltd/Desktop/energyplus"
idf_source_folder = "collection/output_idf1"
of_folder = "Iteration-00{i}"
name_template = "2019-07-31_Change{i}.idf"

out_folder = os.path.join(
    epf_folder,
    idf_source_folder,
    of_folder,
    name_template
)

idfname = os.path.join(
    epf_folder,
    idf_source_folder,
    of_folder
)

epwfile = full_path

jobs = []
for i in range(1,4):
    jobs.append(
        [
            IDF(idfname.format(i=i), epwfile),
            {'output_directory': out_folder,
             'ep_version': '8-9-0'}
        ]
    )
run_functions.runIDFs(jobs, 3)
```

Can you please look at this? ![image](https://user-images.githubusercontent.com/53047942/62612266-f8fc5080-b8fe-11e9-8b99-34b33747ccfa.png) Can you see the image with the error? username_0: Thank you! I changed it a bit, as I need the results to be stored in the file's host folder. There is no error; however, the code doesn't seem to do what I thought it would: it creates a folder 00i with some results ![image](https://user-images.githubusercontent.com/53047942/62613390-45e12680-b901-11e9-8ec9-720057f3ff19.png) The aim is to run a file from each folder and store the results in the corresponding folder.

```
import os  # needed for os.path.join (missing in the original snippet)
from eppy.modeleditor import IDF
from eppy.runner import run_functions

IDF.setiddname('C:\\EnergyPlusV8-9-0\\Energy+.idd')

epf_folder = "C:\\Users\\SALI5786\\OneDrive Corp\\OneDrive - Atkins Ltd\\Desktop\\energyplus"
idf_source_folder = "collection\\output_idf1"
of_folder = "Iteration-00{i}"
name_template = "2019-07-31_Change{i}.idf"

out_folder = os.path.join(
    epf_folder,
    idf_source_folder,
    of_folder,
)

idfname = os.path.join(
    epf_folder,
    idf_source_folder,
    of_folder,
    name_template
)

epwfile = full_path

jobs = []
for i in range(1,5):
    jobs.append(
        [
            IDF(idfname.format(i=i), epwfile),
            {'output_directory': out_folder,
             'ep_version': '8-9-0'}
        ]
    )
run_functions.runIDFs(jobs, 5)
```

Can you please have a look? Regards, Polina username_1: You also need to use `format` on the output directory in the loop:

```
{'output_directory': out_folder.format(i=i), 'ep_version': '8-9-0'}
```

username_0: Oh yeah, it worked! Thank you very much for your help and such a quick reply! username_1: Glad to help. All the best with Python and eppy; we're here if you need help. Status: Issue closed
tidyverse/tibble
399417404
Title: Printing bug with integer64 columns Question: username_0: It seems to happen when the number of digits in the variable changes:

```
library(bit64)  # as.integer64 comes from bit64 (import missing in the original snippet)
df <- data.frame(x=as.integer64(1:5), y=as.integer64(1:5 * 250))
df
#   x    y
# 1 1  250
# 2 2  500
# 3 3  750
# 4 4 1000
# 5 5 1250
as_data_frame(df)
# # A tibble: 5 x 2
#   x               y
#   <S3: integer64> <S3: integer64>
# 1 1               " 250"
# 2 2               " 500"
# 3 3               " 750"
# 4 4               1000
# 5 5               1250
```

A `str()` shows that the actual data is fine; it's only the printed output that is affected. Status: Issue closed Answers: username_1: Seems fine now:

``` r
library(tibble)
library(bit64)
#> Loading required package: bit
#> Attaching package bit
#> package:bit (c) 2008-2012 <NAME> (GPL-2)
#> creators: bit bitwhich
#> coercion: as.logical as.integer as.bit as.bitwhich which
#> operator: ! & | xor != ==
#> querying: print length any all min max range sum summary
#> bit access: length<- [ [<- [[ [[<-
#> for more help type ?bit
#>
#> Attaching package: 'bit'
#> The following object is masked from 'package:base':
#>
#>     xor
#> Attaching package bit64
#> package:bit64 (c) 2011-2012 <NAME>
#> creators: integer64 seq :
#> coercion: as.integer64 as.vector as.logical as.integer as.double as.character as.bin
#> logical operator: ! & | xor != == < <= >= >
#> arithmetic operator: + - * / %/% %% ^
#> math: sign abs sqrt log log2 log10
#> math: floor ceiling trunc round
#> querying: is.integer64 is.vector [is.atomic} [length] format print str
#> values: is.na is.nan is.finite is.infinite
#> aggregation: any all min max range sum prod
#> cumulation: diff cummin cummax cumsum cumprod
#> access: length<- [ [<- [[ [[<-
#> combine: c rep cbind rbind as.data.frame
#> WARNING don't use as subscripts
#> WARNING semantics differ from integer
#> for more help type ?bit64
#>
#> Attaching package: 'bit64'
#> The following object is masked from 'package:bit':
#>
#>     still.identical
#> The following objects are masked from 'package:base':
#>
#>     :, %in%, is.double, match, order, rank
df <- data.frame(x = as.integer64(1:5), y = as.integer64(1:5 * 250))
df
#>   x    y
#> 1 1  250
#> 2 2  500
#> 3 3  750
#> 4 4 1000
#> 5 5 1250
as_tibble(df)
#> # A tibble: 5 x 2
#>   x       y
#>   <int64> <int64>
#> 1 1       250
#> 2 2       500
#> 3 3       750
#> 4 4      1000
#> 5 5      1250
```

<sup>Created on 2019-08-07 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>
MattesGroeger/vim-bookmarks
525247485
Title: gvim 8.1.1 errors Question: username_0: Hello, I thought I'd report this error I am seeing. In _gvimrc: let g:bookmark_sign = '>>' The first time I set a bookmark I see the error shown in the attached pic. <img width="724" alt="vimerror" src="https://user-images.githubusercontent.com/43004704/69183339-1335f500-0ad0-11ea-8501-78a2e7d5624c.PNG"> I can ignore the error and gvim goes on to work properly. If a file has a bookmark in it, it will throw the error again; I simply hit enter and it continues to work. Incidentally, vim in a GNOME terminal works just fine with the same vim profile. Thanks. Answers: username_0: Please accept my apologies. I forgot that I had ruled out vim-bookmarks right after I hit post!!! The actual culprit is: set encoding=utf-8 So I'm off to figure this out now. Thanks. Status: Issue closed
stripe/stripe-android
705617061
Title: Make it possible for the user to decide, on a per-card basis, whether to attach the card or not Question: username_0: ## Summary The Braintree SDK provides an [option to add a "save card" checkbox](https://github.com/braintree/braintree-android-drop-in/blob/4.6.0/Drop-In/src/main/java/com/braintreepayments/api/dropin/DropInRequest.java#L221) in the card entry screen, which determines whether the card will be persisted in their backend or not: <img width="453" alt="braintree" src="https://user-images.githubusercontent.com/9365138/93773217-22219280-fc20-11ea-9856-6174222e9a20.png"> The Stripe SDK provides a slightly related option when launching the `AddPaymentMethodActivity` screen, in `AddPaymentMethodActivityStarter.Args.shouldAttachToCustomer`. This isn't enough to give the user control, though: * This flag determines whether a newly added card will be attached to the customer or not. It doesn't determine whether an option for the user to decide to attach it will be shown or not. * If we rely on the Stripe SDK's UI, our entry point isn't `AddPaymentMethodActivity`, but rather `PaymentMethodsActivity`. `PaymentMethodsActivityStarter.Args` doesn't provide any option regarding this. Also, `AddPaymentMethodRowView.createCard` hardcodes that new cards should be attached to the customer:

```kotlin
internal fun createCard(
    activity: Activity,
    args: PaymentMethodsActivityStarter.Args
): AddPaymentMethodRowView {
    return AddPaymentMethodRowView(
        activity,
        R.id.stripe_payment_methods_add_card,
        R.string.payment_method_add_new_card,
        AddPaymentMethodActivityStarter.Args.Builder()
            .setBillingAddressFields(args.billingAddressFields)
            .setShouldAttachToCustomer(true) // <------------ HERE
            .setIsPaymentSessionActive(args.isPaymentSessionActive)
            .setPaymentMethodType(PaymentMethod.Type.Card)
            .setAddPaymentMethodFooter(args.addPaymentMethodFooterLayoutId)
            .setPaymentConfiguration(args.paymentConfiguration)
            .setWindowFlags(args.windowFlags)
            .build()
    )
}
```

Ideally, we'd like: * A new API in `PaymentMethodsActivityStarter.Args` like `allowAttachToCustomerOverride(boolean)` * This value would be propagated to the `AddPaymentMethodActivityStarter.Args` inside `AddPaymentMethodRowView.createCard` * `AddPaymentMethodActivity` would use this flag to show a checkbox or not * `AddPaymentMethodActivity` would take the checkbox state into account in the existing [shouldAttachToCustomer](https://github.com/stripe/stripe-android/blob/v15.1.0/stripe/src/main/java/com/stripe/android/view/AddPaymentMethodActivity.kt#L53) lazy val. ## Installation method `implementation "com.stripe:stripe-android:15.1.0"` ## SDK version 15.1.0 ## Other information Answers: username_1: @username_0 thanks for filing. This definitely makes sense as a feature to add. We'll add it to our backlog. username_1: @username_0 This functionality is available via PaymentSheet. We don't plan to include this functionality in `AddPaymentMethodActivity`. Status: Issue closed
lmasello/Tp-Taller-de-Programacion-2-Shared-Server
222048920
Title: Endpoint to get song popularity Question: username_0: ![](https://github.trello.services/images/mini-trello-icon.png) [Obtener puntuacion (GET /tracks/{trackID}/popularity)](https://trello.com/c/KMuOck6f/38-obtener-puntuacion-get-tracks-trackid-popularity) Status: Issue closed Answers: username_0: ![](https://github.trello.services/images/mini-trello-icon.png) [Obtener puntuacion (GET /tracks/{trackID}/popularity)](https://trello.com/c/KMuOck6f/38-obtener-puntuacion-get-tracks-trackid-popularity) Status: Issue closed
denilsonsa/batterymon-clone
366741416
Title: AttributeError: 'NoneType' object has no attribute 'endswith' Question: username_0: Hi, when I launch batterymon I receive the following error:

```
Traceback (most recent call last):
  File "/usr/local/bin/batterymon", line 577, in <module>
    main()
  File "/usr/local/bin/batterymon", line 560, in main
    theme = Theme(cmdline.theme)
  File "/usr/local/bin/batterymon", line 260, in __init__
    if not self.validate(theme):
  File "/usr/local/bin/batterymon", line 306, in validate
    if not self.file_exists(self.get_icon(icon)):
  File "/usr/local/bin/batterymon", line 290, in get_icon
    return os.path.join(self.iconpath, "battery_%s.png" % (name,))
  File "/usr/lib/python2.7/posixpath.py", line 70, in join
    elif path == '' or path.endswith('/'):
AttributeError: 'NoneType' object has no attribute 'endswith'
```

I use the Debian distribution with fluxbox. Thanks in advance.
gocodebox/lifterlms-blocks
604820832
Title: Issue with the Table block Question: username_0: ### Reproduction Steps + create a page + insert a table block with 2 rows and 2 columns + click on a cell ### Expected Behavior + the cell is focused and nothing else happens ### Actual Behavior + cells are shifted ### Error Messages / Logs + n/a ### System Report LifterLMS 3.37.19 ### Browser, Device, and Operating System n/a ### Related User Information https://wordpress.org/support/topic/plugin-messes-up-gutenberg-table-editor/ As the user says: _From source I can see that it creates an extra DIV before active cell._<issue_closed> Status: Issue closed
darlinghq/darling-docs
993477447
Title: Add command to load kernel module Question: username_0: In the section Building and Installing, after `sudo make lkm_install`:

````
/darling/build$ sudo make lkm_install
[sudo] password for User:
Scanning dependencies of target lkm_install
Installing the Linux kernel module
make[4]: Entering directory '/darling/src/external/lkm'
Running kernel version is 5.11.0-34-generic
make -C /lib/modules/5.11.0-34-generic/build M=/darling/src/external/lkm modules_install
make[5]: Entering directory '/usr/src/linux-headers-5.11.0-34-generic'
  INSTALL /darling/src/external/lkm/darling-mach.ko
At main.c:160:
- SSL error:02001002:system library:fopen:No such file or directory: ../crypto/bio/bss_file.c:69
- SSL error:2006D080:BIO routines:BIO_new_file:no such file: ../crypto/bio/bss_file.c:76
sign-file: certs/signing_key.pem: No such file or directory
  DEPMOD 5.11.0-34-generic
Warning: modules_install: missing 'System.map' file. Skipping depmod.
make[5]: Leaving directory '/usr/src/linux-headers-5.11.0-34-generic'
make[4]: Leaving directory '/darling/src/external/lkm'
Built target lkm_install
````

Add this command to the guide:

````
lsmod | grep darling_mach || sudo modprobe darling_mach
````

Answers: username_1: Why? The kernel module is autoloaded by darling shell. username_0: So when you run `darling shell` it's loaded automatically? Then please add a note about that to the build guide. Status: Issue closed
CoolProp/CoolProp
114794128
Title: Incompressible docs Question: username_0: Add more incompressible docs and mention the partial derivatives:

```Python
import CoolProp.CoolProp as CP
rho = CP.PropsSI('D', 'T', 273.15 + 25, 'P', 10e5, 'INCOMP::MEG-50%')
drhodT = CP.PropsSI('d(Dmass)/d(T)|P', 'T', 273.15 + 25, 'P', 10e5, 'INCOMP::MEG-50%')
# volumetric thermal expansion coefficient: -(1/rho) * (drho/dT) at constant p
print(-1.0 / rho * drhodT)
```

Functions available: `drhodTatPx`, `dsdTatPx`, `dhdTatPx`, `dsdTatPxdT`, `dhdTatPxdT`, `dsdpatTx`, `dhdpatTx`. Note that all partial derivatives require a constant concentration, which is denoted by the `x`, but this `x` is not included in the derivative string: `drhodTatPx` translates to `d(Dmass)/d(T)|P`. Answers: username_0: I updated the docs, but they are not finished. - [ ] Check which quantities can be calculated from: drhodTatPx, dsdTatPx, dhdTatPx, dsdTatPxdT, dhdTatPxdT, dsdpatT and the other properties; see the example above for the expansion coefficient. - [ ] Implement the derived quantities - [ ] Update the docs.
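Following the derivative-string mapping described above, the other first derivatives can be queried the same way. A sketch computing the isobaric specific heat of the same brine from `dhdTatPx` (i.e. `d(Hmass)/d(T)|P`); the cross-check against `Cpmass` assumes the incompressible backend reports both consistently:

```Python
import CoolProp.CoolProp as CP

T, p = 273.15 + 25, 10e5
fluid = 'INCOMP::MEG-50%'

# cp = (dh/dT) at constant pressure; the constant concentration is
# implied for INCOMP fluids and not part of the derivative string
cp = CP.PropsSI('d(Hmass)/d(T)|P', 'T', T, 'P', p, fluid)

# cross-check against the directly reported specific heat
print(cp, CP.PropsSI('Cpmass', 'T', T, 'P', p, fluid))
```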
imsanjoykb/German-Language-Learning-Resource
908514622
Title: Beispiele (examples) in the German language Question: username_0: German Language: Beispiele (examples): - ie is pronounced like the English long "ee". Example: Frieden ("free-den") — peace. - eu is pronounced like "oy". Example: freuen ("froy-en") — to be glad. - au is pronounced like "ow". Example: Frauen ("frow-en") — women.
kubernetes/enhancements
404856378
Title: Graduate the kube-controller-manager ComponentConfig to v1beta1 Question: username_0: # Enhancement Description - One-line enhancement description (can be used as a release note): Usage of the kube-controller-manager configuration file has graduated from experimental, as the API version is now v1beta1 - Primary contact (assignee): @username_0 - Responsible SIGs: @kubernetes/sig-api-machinery-api-reviews @kubernetes/wg-component-standard - Design proposal link (community repo): N/A - Link to e2e and/or unit tests: - Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from the code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @username_1 @deads2k - Approver (likely from the SIG/area to which the enhancement belongs): @username_1 @deads2k - Enhancement target (which target equals which milestone): - Alpha release target (x.y) - Beta release target (x.y) v1.14 - Stable release target (x.y) v1.15 The kube-controller-manager ComponentConfig is currently in v1alpha1. The spec needs to be graduated to v1beta1 and beyond in order to be widely usable. /assign @username_1 @deads2k Answers: username_1: The controller manager doesn't even consume a config file currently, and the existing v1alpha1 config is not serializable. I'd expect serialization and config-file loading while still in alpha to be the first stage, then promotion to beta the second step. username_0: Yep, that indeed makes sense. This was just automatically generated, as per the request for tracking in k/enhancements overall. For this specific case, I can change it to just "k-c-m ComponentConfig" and mark alpha for v1.14 (serializable). username_2: +1 for alpha first. username_3: Are we also planning on splitting the k-c-m config into per-controller kinds? username_4: +1 for alpha first. :smile: Had some pre-discussion with username_2 before. username_0: @username_3 yes. username_2: We also discussed this at KubeCon with @username_15 and @username_0. I think this topic deserves a KEP to think through the usability implications of that. I can imagine what it would be like to choose the right v1alpha1, v1beta1, v1, v2 version on a per-controller basis. This is getting complicated. I can also see reasons why we might want that, though. username_5: @username_0 I don't see a KEP linked for this issue - I'm removing it from the 1.14 milestone, as having a KEP in an implementable state is a requirement for 1.14. To get this issue added back, please submit an exception request. username_6: @username_0 I'm the enhancement lead for 1.15. I don't see a KEP filed for this enhancement, and per the guidelines all enhancements require one. Please let me know if this issue will have any work involved for this release cycle and update the original post to reflect it. Thanks! username_7: @username_6, I've scheduled the KEP for discussion with component-standard-wg on Tuesday. I believe we want to work toward this for 1.15 -- some of the dependencies are progressing slowly. @username_4 already has some WIP: https://github.com/kubernetes/kubernetes/pull/70359 username_6: @username_7 end of day tomorrow is Enhancement Freeze for 1.15. A KEP must be merged and in an implementable state to be considered a part of the 1.15 release. I don't see a high probability of that happening. cc @username_9 @craiglpeters @username_5 username_7: @username_6 -- discussed on the WG call; this will wait till 1.16. Thanks for checking up 👍 username_8: Awesome. A big step forward.
👍 username_9: Hey there @username_0, I'm one of the 1.16 Enhancement Shadows. Is this feature going to be graduating through alpha/beta/stable stages in 1.16? Please let me know so it can be added to the [1.16 Tracking Spreadsheet](http://bit.ly/k8s116-enhancement-tracking). If it's not graduating, I will remove it from the milestone and change the tracked label. Once coding begins, or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly. As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stage's requirements. Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29. Thank you username_10: Hello @username_0, 1.17 Enhancement Shadow here! 🙂 I wanted to reach out to see **if this enhancement will be graduating to alpha/beta/stable in 1.17?** Please let me know so that this enhancement can be added to the [1.17 tracking sheet](https://bit.ly/k8s117-enhancement-tracking). Thank you! <br> 🔔 Friendly Reminder - The current release schedule is - Monday, September 23 - Release Cycle Begins - Tuesday, October 15, EOD PST - Enhancements Freeze - Thursday, November 14, EOD PST - Code Freeze - Tuesday, November 19 - Docs must be completed and reviewed - Monday, December 9 - Kubernetes 1.17.0 Released - A Kubernetes Enhancement Proposal (KEP) must meet the following criteria **before Enhancement Freeze** to be accepted into the release - PR is merged in - In an `implementable` state - Includes a test plan and graduation criteria - All relevant k/k PRs should be listed in this issue username_11: /remove-lifecycle stale username_12: Hey there @username_7 @username_0 -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha|beta|stable in 1.18? The current release schedule is: Monday, January 6th - Release Cycle Begins Tuesday, January 28th EOD PST - Enhancements Freeze Thursday, March 5th, EOD PST - Code Freeze Monday, March 16th - Docs must be completed and reviewed Tuesday, March 24th - Kubernetes 1.18.0 Released To be included in the release, this enhancement must have a merged KEP in the implementable status. The KEP must also have graduation criteria and a Test Plan defined. If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍 We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements Thanks! username_12: Enhancements Freeze is in 7 days. If you seek inclusion in 1.18, please update as requested above. Thanks! username_11: /remove-lifecycle stale username_12: Hi @username_0 @username_7! 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19? In order to have this be part of the release: The KEP PR must be merged in an implementable state The KEP must have test plans The KEP must have graduation criteria. The current release schedule is: Monday, April 13: Week 1 - Release cycle begins Tuesday, May 19: Week 6 - Enhancements Freeze Thursday, June 25: Week 11 - Code Freeze Thursday, July 9: Week 14 - Docs must be completed and reviewed Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released Please let me know and I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly. 👍 Thanks! username_12: As a reminder, enhancements freeze is tomorrow, May 19th EOD PST. In order to be included in 1.19, all KEPs must be implementable with graduation criteria and a test plan. Thanks. username_12: Unfortunately the deadline for the 1.19 Enhancement freeze has passed. For now this is being removed from the milestone and the [1.19 tracking sheet](https://bit.ly/k8s-1-19-enhancements). If there is a need to get this in, please file an [enhancement exception](https://github.com/kubernetes/sig-release/blob/master/releases/EXCEPTIONS.md). username_13: /remove-lifecycle stale username_12: Hi @username_0 @username_7, Enhancements Lead here. Are there any plans for this in 1.20? Thanks! Kirsten username_13: I think the primary contacts here need to be updated with folks from api machinery willing to push on this effort.
username_14: It looks like there is actually no KEP for this, and given the comment history here I'm guessing no one has bandwidth to write / review one. I suggest closing this and reopening when there's collective appetite. username_15: @obitech is planning on writing a KEP when he gets some more bandwidth, but I agree that it can be closed in the meantime. username_12: Hi all, Me again :) There seems to be agreement that this issue will be closed. Is someone going to close it? Thanks, Kirsten username_15: I don't have permission to close it. Anyone on this thread, feel free. Status: Issue closed username_12: Ok, closing this per @username_15's request :smile: Please reopen when necessary.
lttkgp/metadata-extractor
646426317
Title: extraAttrs failing in the majority of cases Question: username_0: The `__extraAttrs` function is raising exceptions for the majority of cases. For the first 25 posts, it returns valid values for only 2 links. The cases where the target divs are not found in the soup need to be handled more gracefully. In case `extraAttrs` fails, a `None` value must be returned to avoid breaking the script where it is called.<issue_closed> Status: Issue closed
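The function body isn't shown in the issue; a minimal sketch of the graceful fallback it asks for, assuming BeautifulSoup is used for parsing (the `div`/`target-class` selector and the returned dict shape are illustrative placeholders, not the project's actual code):

```python
from typing import Optional

from bs4 import BeautifulSoup

def _extra_attrs(html: str) -> Optional[dict]:
    """Return extra attributes scraped from the page, or None when
    the expected markup is absent (instead of raising)."""
    soup = BeautifulSoup(html, "html.parser")
    # Illustrative selector; the real one lives in metadata-extractor.
    target = soup.find("div", class_="target-class")
    if target is None:
        return None  # markup missing or changed -- don't break the caller
    return {"text": target.get_text(strip=True)}
```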
conda-forge/cmake-feedstock
180805822
Title: ccmake files Question: username_0: When I try to run `ccmake ../path` I end up with the following error:

```
Error running cmake::LoadCache(). Aborting.
```

I wonder if #13 will fix this, not sure. Answers: username_1: On Mac, cmake versions 3.7 and 3.8.

```
$ ccmake .
dyld: Library not loaded: @rpath/libform.5.dylib
  Referenced from: /Users/shadow_walker/anaconda/envs/eman-env/bin/ccmake
  Reason: image not found
Trace/BPT trap: 5
```

username_1: My error is not the same as in OP, but still fits the title. Should this be a separate issue? Status: Issue closed
nir0s/ghost
179087383
Title: Allow MultiFernet key usage Question: username_0: This will allow us to provide multiple keys for encryption and decryption, if the user chooses it. Answers: username_0: Unfortunately, this is currently irrelevant, as MultiFernet only allows encrypting with the first key provided. Status: Issue closed
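Purely as an illustration of the idea (not the issue author's original snippet), multi-key usage with the `cryptography` package's `MultiFernet` looks like this:

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
new_key = Fernet(Fernet.generate_key())

# MultiFernet encrypts with the FIRST key only (the limitation noted
# in the follow-up comment above), but it tries every listed key on
# decryption, which is what enables key rotation.
f = MultiFernet([new_key, old_key])

token = f.encrypt(b"secret")          # always uses new_key
assert f.decrypt(token) == b"secret"  # any listed key may decrypt
```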
GovernIB/pluginsib-arxiu
852099157
Title: The plugin.arxiu.caib.aplicacio.codi parameter does not work correctly Question: username_0: When this parameter is configured from the RIPEA application, the eni:app_tramite_doc metadata of the created documents does not take this value. Answers: username_0: After running several tests, we have seen that the parameter works correctly.
atweiden/voidvault
544381949
Title: Grub load error Question: username_0: Hi. I tried the latest stable version of this script from your custom iso. The EFI install did not create any boot entries, so I tried a BIOS install and received a 'Grub load error' message. The install appeared to work correctly, although it did mention that some dracut modules would not be installed due to missing files; is that the problem? Answers: username_1: Also, could you clarify what you mean by this? Note there is no BIOS install mode or EFI install mode in vv; it installs both grub bios and efi support simultaneously. Are you bootstrapping void using `voidvault new` without modifying the source code? username_1: Just thought of the other, more likely, possibility: If you're using the voidvault source code shipped with my aging custom iso, which it sounds like you are and which is highly inadvisable, that version of voidvault does not specify `cryptsetup luksFormat --types luks1` (see: https://github.com/username_1/voidvault/commit/718488a85cd8cac9bc96d445d80f21c861eca0f5). Not specifying this cryptsetup option worked perfectly well prior to cryptsetup-2.1.0. In cryptsetup-2.1.0 and later, cryptsetup began to default to `--types luks2`, which is incompatible with Grub, including recent versions of Grub. Please ensure you're using the latest stable version of voidvault, which is 1.10.0. Use it from the official isos, and you should be fine. vv-1.10.0 should also work fine from my custom void iso though; just make sure you grab that version and don't use something from 2018. username_0: Yes, I'm running things unmodified, following all the instructions. I only meant that I tried installing once with my BIOS set for UEFI and once with it set for CSM compatibility mode. I've also tried mounting the EFI partition to change the location of the .efi file to see if my BIOS picks it up, as suggested [here](https://wiki.voidlinux.org/Installation_on_UEFI,_via_chroot). Thanks for the help. username_0:

```
dracut: dracut module 'bootchart' will not be installed, because command '/sbin/bootchartd' could not be found!
dracut: dracut module 'modsign' will not be installed, because command 'keyctl' could not be found!
dracut: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
dracut: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
dracut: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found!
dracut: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: dracut module 'multipath' will not be installed, because command 'multipath' could not be found!
dracut: dracut module 'stratis' will not be installed, because command 'stratisd-init' could not be found!
dracut: dracut module 'stratis' will not be installed, because command 'thin_check' could not be found!
dracut: dracut module 'stratis' will not be installed, because command 'thin_repair' could not be found!
dracut: dracut module 'crypt-gpg' will not be installed, because command 'gpg' could not be found!
dracut: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
dracut: dracut module 'fcoe-uefi' will not be installed, because command 'dcbtool' could not be found!
dracut: dracut module 'fcoe-uefi' will not be installed, because command 'fipvlan' could not be found!
dracut: dracut module 'fcoe-uefi' will not be installed, because command 'lldpad' could not be found!
dracut: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
dracut: dracut module 'usrmount' will not be installed, because it's in the list to be omitted!
```

username_1: It very well could be the case that your particular 64-bit laptop requires a 32-bit GRUB installation. I've intentionally [excluded support](https://github.com/username_1/voidvault/commit/2dd1ed4d4e8f321e05ada0963833b751c481b0e5) for such hardware to this date. I would not object to a pull request implementing a flag handling this edge case for the completely haywire machines requiring it, but depending on how much of an eye-sore it inflicts on the CLI, I may or may not merge it. I would really only consider implementing it if I myself located a modern and compelling piece of hardware that had a credible explanation for this insane requirement. Sorry. The dracut output is fine. username_0: So I unscrewed the SSD holding the voidvault installation from the secondary bay caddy in the laptop to put it back in the primary slot, and voidvault booted. The laptop BIOS must not be able to boot properly from a secondary drive. Sorry to waste your time; I am now enjoying the voidvault installation on two computers. Status: Issue closed username_1: Sounds like a more generally applicable case than I expected. I'll test this if/when I obtain the requisite hardware. Apologies for any hassle caused.
cloudfoundry/java-buildpack
351566390
Title: Supply additional libraries into tomcat/lib
Question: username_0: I have a proprietary framework that is normally operated by standing up a Windows server, installing Tomcat, and installing framework libraries into tomcat/lib. These are shared by all applications running on that server. Then the application itself
Answers: username_1: You basically have two options:
1. Fork the buildpack and add an additional component. We have done this not only to have jar files in tomcat/lib, but also to use native shared libraries as well. If you know Ruby this is pretty easy; if you are teaching yourself Ruby just for this one task it can be an issue.
2. Use the .profile.d script to move jar files from your application deployment to tomcat/lib (a sketch of this follows below).
username_2: @username_0 my example will help you https://github.com/username_2/demo-payara-micro5/blob/hazelcast/pom.xml#L67-L73 https://github.com/username_2/demo-payara-micro5/blob/hazelcast/.profile
username_3: I think one of the first things to take into account is that only a single application will ever run in a Tomcat instance on Cloud Foundry. Therefore, including libraries in `$CATALINA_HOME/lib` is unnecessary as they'll never be shared. Instead, all of the libraries required by an application should be contained and versioned with that application. This prevents the lock-step upgrade problem that has been so prevalent in enterprise Java development to date. If you still need to include libraries within your `$CATALINA_HOME/lib` directory (and again, we strongly discourage this), [External Configuration](https://github.com/cloudfoundry/java-buildpack/blob/master/docs/container-tomcat.md#external-tomcat-configuration) would be the proper way to handle it.
Status: Issue closed
username_0: Oh I get it, trust me. I know they won't be shared across apps, because each runs in its own instance, etc. However the libraries are engineered in such a way that if you include the runtime libraries in app/lib, it will cause a conflict because some of those classes are also defined in the libraries in tomcat/lib. It is what it is. So I am going down the path with External Configuration. Using this method I was able to include my dependent jars. Thanks all!
username_1: If you fork the buildpack, you are not limited to a single war file anymore. We had an internal format with ./apps/*.war, ./lib/*.jar and ./config/*.[xml|properties] directories in a tar file, which I extended pretty easily with a fork of the buildpack. As always, the real question you have to ask yourself is whether maintaining that customization is worth the functionality you get; usually the answer is no. But for transition support, saying that you can just drop your old builds into PCF and it just works can overcome a lot of resistance.
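A minimal sketch of the `.profile` route from option 2 above. The paths are assumptions about the java-buildpack's droplet layout, not something verified in this thread:

```sh
# .profile runs in the application container before Tomcat starts.
# Copy jars shipped with the app into the buildpack-provided Tomcat lib dir.
# Both paths below are illustrative and depend on your buildpack version.
cp "$HOME/WEB-INF/ext-lib/"*.jar "$HOME/.java-buildpack/tomcat/lib/"
```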
samson-int/jquery.fancybox
652204411
Title: No gap between the images
Question: username_0: Because of this, at non-standard resolutions the neighbouring images show through.
![image](https://user-images.githubusercontent.com/1856145/86767806-36101c00-c055-11ea-8da5-600527ee146c.png)
Answers: username_0: Fixed in [`1.4.5`](https://github.com/samson-int/jquery.fancybox/releases/tag/1.4.5).
Status: Issue closed
contributte/webpack
739176963
Title: Contributted
Question: username_0:
- packagist
- contributte site
- namespace
- docs
- readme
Answers: username_1: @username_0 the first two points are on you; I'll then tag 2.0.0 and it will be ✅
username_0: Packagist ready https://packagist.org/packages/contributte/webpack
Status: Issue closed
username_1: Aaaand released [2.0.0](https://github.com/contributte/webpack/releases/tag/2.0.0) 🎉
nicoverbruggen/phpmon
655081735
Title: Release notarized binaries from now on
Question: username_0: Who likes Gatekeeper being annoying because the app is not signed with a developer ID and/or because of a lack of notarization? No one.
<img width="682" alt="image" src="https://user-images.githubusercontent.com/3715845/87209974-325ded00-c314-11ea-93d5-4b0d314fd694.png">
Close this issue when v2.1 is released (notarized version).
Status: Issue closed
dart-lang/sdk
530662292
Title: _WebSocketProtocolTransformer.add() doesn't properly handle Uint8List.view() args
Question: username_0: Instead of

```dart
_payload.add(new Uint8List.view(buffer.buffer, index, payloadLength));
```

in https://github.com/dart-lang/sdk/blob/master/sdk/lib/_http/websocket_impl.dart#L222 this should probably be

```dart
_payload.add(new Uint8List.view(buffer.buffer, buffer.offsetInBytes + index, payloadLength));
```

Dart VM version: 2.5.0 (Fri Sep 6 20:10:36 2019 +0200) on "macos_x64"
Answers: username_0: To add a little more context, I implemented io.Socket to use an SSH tunnel: https://github.com/username_0/dartssh/blob/master/lib/socket_io.dart#L73 It would be most optimal to call controller.add() with a Uint8List.view(), but this is not currently possible. (And it results in strange failure behavior that took me a while to debug.) I have also made the same mistake (took a view of a Uint8List, failing to account for when the input is itself a view), and have a utility function to protect against it: https://github.com/username_0/dartssh/blob/master/lib/serializable.dart#L15 A broader fix could be considered. I tested the patch described in the initial report for several cases successfully. I also signed the contributor release and attempted to upload a patch for review, but it seems that I lack permissions to do so. Thanks!
username_1: Hi @username_0, thanks for reporting this issue and coming up with a fix! I can confirm that your analysis seems right and buffer.offsetInBytes should've been used. I can make the change for you if you'd like? Alternatively you can upload a pull request here on github, which will verify your CLA and automatically migrate the change to our Gerrit review tool. You can also upload the changelist directly to https://dart-review.googlesource.com using the `git cl upload` command from depot_tools as described in [CONTRIBUTING.md](https://github.com/dart-lang/sdk/blob/master/CONTRIBUTING.md). You should already have all the right permissions for that (you'll need to create an account first). If our procedure for contributing isn't working, I'll be happy to debug it; it'll be useful if you can say exactly what's going wrong. But again, I can make the change for you in the SDK if you prefer that? You seem to have discovered a systemic issue. The `Uint8List.view` API doesn't lend itself well to nested use cases and it's easy to get it wrong. A quick search of our codebase flags lots of uses of the API without `offsetInBytes`. Many of them seem correct at a glance, but there are probably other mistakes like the one you encountered here. We'll probably want to review these cases and think about how to avoid this problem in the future since it seems like an easy mistake to make.
username_0: Hi @username_1, thanks for looking into this issue! Sure! Thanks for taking over. (Oops, I think I missed creating the account when I tried uploading. In the future I'll try sending a normal PR.) I think adding a `Uint8List.viewUint8List()` would help. Or, if allowing breaking changes, rename `Uint8List.view` to `Uint8List.viewBuffer` and make `Uint8List.view` accept `Uint8List` arguments instead of `ByteBuffer`. A breaking change would trigger/coerce a review of those tricky cases.
username_1: We have a [breaking change policy](https://github.com/dart-lang/sdk/blob/master/docs/process/breaking-changes.md). I discussed some potential API improvements with @lrhn that could be done in a compatible fashion.
username_1: I gave it a try to fix all such instances in the SDK libraries at <https://dart-review.googlesource.com/c/sdk/+/127165>. Thanks for reporting this issue!
username_1: Alright, I landed the fix. It won't be in the upcoming 2.7 release, but it will be in the next full release after that. It'll appear in a future dev release towards the next full release, or you can build Dart from git to get the fix now. Thanks for reporting the issue! We drafted up a proposal for a TypedData.sublistView() API at <https://dart-review.googlesource.com/c/sdk/+/127321> to avoid this kind of problem in the future.
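To make the pitfall concrete, a standalone Dart sketch (illustrative names, standard library only):

```dart
import 'dart:typed_data';

void main() {
  final backing = Uint8List.fromList(List.generate(16, (i) => i));
  // A view that starts 4 bytes into the backing buffer.
  final view = Uint8List.view(backing.buffer, 4, 8); // bytes 4..11
  // Wrong: ignores the view's own offset, so this reads bytes 2..5 of the
  // *backing* buffer, not bytes 2..5 of the view.
  final wrong = Uint8List.view(view.buffer, 2, 4); // [2, 3, 4, 5]
  // Right: fold the view's offset into the shared buffer, as in the patch.
  final right = Uint8List.view(view.buffer, view.offsetInBytes + 2, 4); // [6, 7, 8, 9]
  print('$wrong vs $right');
}
```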
nuxt/typescript
978281066
Title: @babel/plugin-transform-parameters not working correctly Question: username_0: **Describe the bug** When using babel in conjunction with nuxt typescript I run into an issue using @nuxt/babel-preset-env, which contains @babel/plugin-transform-parameters. Default parameters should be transpiled for compatibility with e.g. IE11 but in the resulting bundles they are still present: `... t.enc.Base64url={stringify:function(t,e=!0){ ... }} ...` <= `e=!0` and `... return o.join("")},parse:function(t,e=!0){ ... } ...` <= `e=!0` **To Reproduce** My config is: nuxt.config.js ``` build: { babel: { presets: [ [ '@nuxt/babel-preset-app', { ignoreBrowserslistConfig: true, }, ], ], }, }, ``` tsconfig.json ``` { "compilerOptions": { "target": "ES2018", "module": "ESNext", "moduleResolution": "Node", "lib": [ "ESNext", "ESNext.AsyncIterable", "DOM" ], "esModuleInterop": true, "allowJs": true, "sourceMap": true, "strict": true, "noEmit": true, "experimentalDecorators": true, "baseUrl": ".", "paths": { "~/*": [ "./*" ], "@/*": [ "./*" ] }, "types": [ "@nuxt/types", "@types/node", "@nuxtjs/axios" ] [Truncated] "node_modules", ".nuxt", "dist" ] } ``` **Expected behavior** Default parameters should be transpiled correctly `... t.enc.Base64url={stringify:function(t,e){ ... }} ...` **Additional context** I could not track down the source of these functions yet, but I'm using: - crypto-js - js-cookie - swiper - vue-awesome-swiper - vuex-module-decorators
planningcenter/developers
229212068
Title: API Support for Church App Logins Question: username_0: ### Detailed Description of the Problem/Question Can the PCO API be integrated with church apps, such as Subsplash or eChurch, in such a way that certain parts in the app require logging in using PCO user credentials? Further, can this login activity be recorded in the Activity section in the user's profile in PCO? ##### Steps to reproduce: ##### API endpoint I'm using: ##### Programming language I'm using: ##### Authentication method I'm using: e.g. OAuth 2, Personal Access Token, Browser Session (Cookie) Answers: username_1: Not yet, but we are working on that. username_0: @username_1 Can you share which church app platform is going to be interoperable with it? What is the target timeline? username_1: @username_0 We are currently rebuilding our login system. After that is done we will make it so any congregant can login (right now it is only admins and volunteers in Services). With those changes any church app should be able to make the Planning Center login the main login for your app. Logging login activity is a great idea and I'll submit that as a feature request. Status: Issue closed username_2: Any update on this feature? username_1: @username_2 this is still in the works, no scheduled release yet though. Sorry.
exportarts/ngx-prismic
443065794
Title: Content Validation for `Paragraphs` should accept `Paragraphs` as input Question: username_0: The current validation functions `getDefaultParagraphs()` and `setDefaultParagraphs()` only accept `string | string[]` as fallback content. It should be possible to provide `Paragraph | Paragraphs` to enable formatted/styled content.<issue_closed> Status: Issue closed
boonebgorges/buddypress-group-email-subscription
293194803
Title: Daily mailing not working Question: username_0: Hi, i install your plugin and it work perfect. I just want to set the mailing to daily because some groups have huge members. When a user post something i a stream a takes a lot of time to process. Unfortunately the daily function is not working. When i check the ?sum=1 there are no error and the queue is listing well. But whatever I try mails are not sending. Is there a solution for this? Thank you! Answers: username_1: How many members are in your large groups? You say "huge" - does this mean dozens, hundreds, thousands? Are you certain that *no* emails are going out? It's possible that the groups are large enough that PHP is running out of memory partway through, so that only some members are getting their digests. I'd suggest checking outgoing mail logs to see if anyone is getting the digests, as well as PHP error logs to see if there are fatal errors, "Allowed memory ... exhausted" notices, or "Maximum execution time..." errors. This will help narrow down the issue. username_0: Hi Boone, the biggest group has 180 members. When i post a message in a activity stream(with the all mail option enabled) it takes about 20+ seconds. The function working correct. Everybody in this group gets an e-mail. When i set the group settings to **daily** the mails are in the queue but never sending out. I check the PHP error log and its clear. username_1: Did you check the outgoing mail log? When you say that the mails are in the queue, have you verified that they are *all* in the queue, for *all* members? Are other scheduled tasks working properly on your WP installation? If you schedule a post to be published in the middle of the night, does it work? username_0: Hi Boone, Thanks for your quick reply. 1. I use SMTP->Mailgun for sending outgoing mails. 2. Sorry but i check the ?sum=1 page. I thought that was a queue for sending mails. But there are no mails in the mail queue. I just plan a new blogpost and it's not working. I plan the posts for 19.47 and now i have the message. Planned failed. Strange?! username_1: Yes, ?sum=1 is a queue for pending digests. If you don't see any items when you visit `?sum=1`, it means that the digest routine has run. I'm not sure I understand what you are saying about the new blogpost - perhaps your cron jobs were stuck for some reason, and when that post published, it also pushed out the queued digests? username_0: OK. I cannot plan a blogpost! It doesn't automatically publish the post. I get the error Planned failed. If i publish the post by hand(manually), the blogpost is working but the /?sum=1 page is still the same (not empty but full of queue mail) username_1: Got it. I guess this means that your WP cron system is not working. This could be caused by a handful of things. Check Google for some resources: https://encrypted.google.com/search?hl=en&ei=oBFyWqPMMdHQsAXghouoCA&q=troubleshooting+wordpress+cron&oq=troubleshooting+wordpress+cron&gs_l=psy-ab.3..33i22i29i30k1.1725.2779.0.2935.14.9.0.3.3.0.189.980.3j5.8.0....0...1c.1.64.psy-ab..3.11.1025...0j0i22i30k1.0.AHhQW2hPwxU username_0: OK. I take a look! And let you know when i fixed the problem. username_0: Hi Boone, i fixed the problem. I manually add a cronjob and everything is working now. I have another question. Is it possible to set HTML mail? username_0: I see this: HTML EMAILS The digest and summary emails are sent out in multipart HTML and plain text email format. This makes the digest much more readable with better links. 
The email is multipart so users who need only plain text will get plain text.
But my daily mail is plain text. I use Exchange / Outlook.
username_0: I see this in a post: Do you receive BuddyPress HTML emails from other BP notifications such as at-mention or private message emails? If you are getting plain-text emails for other BP email notifications, then the issue is coming from somewhere else. The mail from BuddyPress is also plain text.
username_1: Great, sounds good. I'm going to close this ticket. If HTML email is not working for any BP emails, then that's your core problem. This could be happening because of the use of some third-party tool (like your SMTP plugin). Any plugin that overrides `wp_mail()` will, by default, disable BP's mails. You can try overriding this logic as follows:

```php
add_filter( 'bp_email_use_wp_mail', '__return_false' );
```

but this might break SMTP integration.
Status: Issue closed
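If it helps the next reader, a sketch of where such an override could live. The mu-plugin placement and header are assumptions, not from the thread; `add_filter` and `__return_false` are WordPress core:

```php
<?php
/*
Plugin Name: Force BuddyPress HTML mail (sketch)
*/
// Dropping this file in wp-content/mu-plugins/ makes it load before themes,
// so the filter is registered early. May conflict with SMTP plugins, as
// noted above.
add_filter( 'bp_email_use_wp_mail', '__return_false' );
```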
danielmichaels/RFC.py
566028887
Title: Tests fail if ~/.rfc/ does not exist Question: username_0: ``` ... [ 47s] self = <tests.test_utils.TestUtils testMethod=test_categories> [ 47s] [ 47s] def setUp(self): [ 47s] if not os.path.exists(Config.TESTS_FOLDER): [ 47s] > os.mkdir(Config.TESTS_FOLDER) [ 47s] E FileNotFoundError: [Errno 2] No such file or directory: '/home/abuild/.rfc/tests' [ 47s] [ 47s] tests/test_utils.py:24: FileNotFoundError ``` After manually creating ~/.rfc/ , all tests pass. :+1: Answers: username_1: Thank you for raising this, and all the other issues - I did not see these until now. They are all valid, and fixable. If you feel the need please raise a PR for this and any other the others. Status: Issue closed username_1: I am closing this, the fix was so simple. Again thank you for raising this issue and apologies for the delayed response.
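The class of fix implied above, as a one-line Python sketch (`Config` is the project's own settings object, assumed importable):

```python
import os

# Create the whole directory tree instead of assuming ~/.rfc already exists;
# exist_ok avoids the race/failure when it is already there.
os.makedirs(Config.TESTS_FOLDER, exist_ok=True)
```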
sensu/sensu-go
932626295
Title: sensu-backend build with go >1.15 Question: username_0: When using a backend build with go >1.15 you can run into issues with certificates Some agents will not be able to connect to the backend and will log the following error: ```json {"component":"agent","error":"x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0","level":"error","msg":"reconnection attempt failed","time":"2021-06-29T11:07:51+02:00"} ``` More info from Golang release notes: https://golang.google.cn/doc/go1.15#commonname ## Expected Behavior sensu-agent should connect to the backend ## Current Behavior sensu-agent cannot connect to the backend ## Possible Solution As noted in the error you can set the environment variable on the **agent** and restarting the sensu agent process after setting the environment variable. ## Steps to Reproduce (for bugs) The issue is only present on our Windows agent. I' ll be looking deeper on the certificates and how they were generated. ## Your Environment sensu-backend running Ubuntu 18.04 with backend build yesterday ( https://github.com/sensu/sensu-go/tree/d1480a492024045acac62772f62666204ad7f93d ) sensu-agent 6.4.0.4826 on Windows 2012 (and most likely also 6.3 too) Answers: username_0: After some investigation it is not related to the backend but to agent trying to connect to the sensu backend where the cert has been issued by puppet (default behavior on the sensu-go puppet module) but does not include a SAN. More info here: https://tickets.puppetlabs.com/browse/SERVER-2338 username_1: This issue has been mentioned on **Sensu Community**. There might be relevant details there: https://discourse.sensu.io/t/wss-connection-issues-on-sensu-deployed-by-puppet/2637/3 username_2: Hi, while this is unfortunate, it seems out of our control for the reasons mentioned in the issue description. I believe steps to remedy the problem have already been linked above, so I'm closing this. Status: Issue closed
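For concreteness, one way to apply the agent-side workaround that the Go error message itself names. The systemd drop-in paths and unit name are assumptions about the install, and the `GODEBUG` escape hatch was temporary (removed in later Go releases):

```sh
# Illustrative: set GODEBUG for the agent service, then restart it.
sudo mkdir -p /etc/systemd/system/sensu-agent.service.d
printf '[Service]\nEnvironment=GODEBUG=x509ignoreCN=0\n' | \
  sudo tee /etc/systemd/system/sensu-agent.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart sensu-agent
```

The durable fix, as the thread concludes, is reissuing certificates with a Subject Alternative Name rather than relying on the Common Name.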
hexojs/eslint-config-hexo
314057289
Title: enable "Require confirmation of pull requests before merging" Question: username_0: Should I enable "Require confirmation of pull requests before merging" setting in the master branch? Answers: username_1: Agree. BTW, is there a setting for this? username_0: see https://help.github.com/articles/about-protected-branches/ username_2: I just enabled protected branch on master, but that did not protect against a direct edit on master via the web interface. That seems strange or I did something wrong...? username_3: You are one of the owners of the org, so you can still modify the master branch directly (just like merge a PR without review). That's intended. username_2: Understood. I believe we can close this issue then! Status: Issue closed
MicrosoftDocs/azure-docs
398277477
Title: C# Classes? Question: username_0: Any example classes for the response? It appears you're responding with JSON that required me to define specific class types for each entry. Why not a generic list with a property to indicate language. Be happy to see JsonConvert.DeserializeObject() example into strongly typed class structure. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3de39fe1-ff32-27af-bfdb-dcad5a9d2da6 * Version Independent ID: 28d24d9e-1f35-78b6-f0c6-77728039ae5b * Content: [Quickstart: Translate text, C# - Translator Text - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/translator/quickstart-csharp-translate) * Content Source: [articles/cognitive-services/Translator/quickstart-csharp-translate.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Translator/quickstart-csharp-translate.md) * Service: **cognitive-services** * GitHub Login: @username_1 * Microsoft Alias: **username_1** Answers: username_1: Please assign to @Jann-Skotdal. @Jann-Skotdal - Can you have someone from engineering take a look at this and provide feedback? Status: Issue closed username_2: Thank you for taking the time to share your product and documentation feedback with us. Your input is valued because it helps us create the right documentation for our customers. Due to the volume of issues in our queue, we are closing open issues older than 90 days. We hope to continue hearing from you. Thank you. username_1: @username_0 - This is long overdue, but we're in the process of rolling out changes to the C# samples. I wanted to share an early sample with you even though this issue is marked as closed. Let us know what you think. ```csharp // This sample uses .NET Core 7.1 or later for async/await. using System; using System.Collections.Generic; using System.Net.Http; using System.Text; using System.Threading.Tasks; // Install Newtonsoft.Json with NuGet using Newtonsoft.Json; namespace TranslateTextSample { /// <summary> /// The C# classes that represents the JSON returned by the Translator Text API. /// </summary> public class TranslationResult { public DetectedLanguage DetectedLanguage { get; set; } public TextResult SourceText { get; set; } public Translation[] Translations { get; set; } } public class DetectedLanguage { public string Language { get; set; } public float Score { get; set; } } public class TextResult { public string Text { get; set; } public string Script { get; set; } } public class Translation { public string Text { get; set; } public TextResult Transliteration { get; set; } public string To { get; set; } public Alignment Alignment { get; set; } public SentenceLength SentLen { get; set; } } public class Alignment { public string Proj { get; set; } } public class SentenceLength { public int[] SrcSentLen { get; set; } public int[] TransSentLen { get; set; } } class Program { // Async call to the Translator Text API [Truncated] } } } } static async Task Main(string[] args) { // This is our main function. // Output languages are defined in the route. // For a complete list of options, see API reference. string host = "https://api.cognitive.microsofttranslator.com"; string route = "/translate?api-version=3.0&to=de&to=it&to=ja&to=th"; string subscriptionKey = "YOUR_KEY_GOES_HERE"; Console.Write("Type the phrase you'd like to translate? 
"); string textToTranslate = Console.ReadLine(); await TranslateTextRequest(subscriptionKey, host, route, textToTranslate); } } } ``` username_0: Hi, Thanks for getting back to me. Yeah, we needed something like that. I’ve had that feature in backlog for a while so it will be nice to get closure on this from my end. I think any time any C# code that is getting back raw JSON Microsoft code should expose actual classes, this isn’t the JavaScript world after all. I would strongly recommend the API have all these POCO classes defined so that C# coders can consume things very easily. Indeed, I’d wrap the whole call into an easy to use API like Storage or Fluent or any of the other Azure Services we frequently code against. Not sure why you’re following a different pattern from other services we touch. Another suggestion is an ENUM of supported languages as well. Use Attributes to decorate the enum with proper strings to submit to the API so we don’t have to go looking for that documentation as well. Also.. What is .NET Core 7.1? I thought .NET core as on version 3. Is this supposed to be C# 7.1? username_1: @username_0 - Thanks for the feedback. I'm going to pass everything along to our ENG teams. With regards to .NET Core 7.1 -- that's a typo on my end and should be C# v 7.1 or later.
rollup/rollup
551069935
Title: Use only as bundler Question: username_0: I have a non-standard use case where I need rollup to **only bundle** some npm packages. 1. All I want to do is bundle the npm packages and load it to the global namespace, which an independent javascript file will assume it exists. 2. Presumably tree shaking will have to be disabled because rollup will assume that nothing is actually being used in the packages. ```javascript // File: imports.js global.main = { ipcMain: require('electron').ipcMain, uuidv4: require('uuid/v4'), } ``` I can't find an example that shows that it can be done. ```javascript import builtins from "builtin-modules"; import resolve from "@rollup/plugin-node-resolve"; import commonjs from "@rollup/plugin-commonjs"; import { terser } from "rollup-plugin-terser"; ... { input: "imports.js", treeshake: false, output: { file: "imports.out.js", format: ?, // what format should it be for just loading to global namespace sourcemap: false }, external: ["electron", ...builtins], // require('electron') should be left alone plugins: [ resolve(), json(), commonjs(), terser({ecma: 6}) ] }, ``` Answers: username_0: That is what I've tried but it doesn't seem to do anything. Thank you in advance. Status: Issue closed
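For what it's worth, a hedged sketch answering the open `format: ?` question in the config above: the choice is an assumption, not from the thread, but `cjs` suits a Node/Electron global while `iife` suits a browser global.

```javascript
// rollup.config.js sketch: bundle only, no tree shaking, Electron left external.
export default {
  input: 'imports.js',
  treeshake: false, // keep everything; nothing is "used" inside this bundle
  output: {
    file: 'imports.out.js',
    format: 'cjs', // use 'iife' instead if the global lives in a browser page
    sourcemap: false,
  },
  external: ['electron'], // require('electron') is left alone at runtime
};
```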
symfony/symfony
347726282
Title: Use json env resolver in configurations Question: username_0: **Symfony version(s) affected**: 4.1 **Description** I want to be able to configure swiftmailer delivery_adresses via env var. In prod env on production server it should be null, in prod env on staging server it should have some value. Looks like work for env parameters. I try to set it like ```yaml swiftmailer: delivery_addresses: '%env(json:MAILER_DELIVERY_ADDRESSES)%' ``` I specify `json` env processor, and in my env file ```env MAILER_DELIVERY_ADDRESSES='["<EMAIL>"]' ``` ``` A dynamic value is not compatible with a "Symfony\Component\Config\Definition\PrototypedArrayNode" node type at path "swiftmailer.mailers.default.delivery_addresses". ``` Json processor return array, and so it should be valid to pass such value to `PrototypedArrayNode`. Answers: username_1: This is by design. A prototyped array is still part of the semantic configuration tree. The reason behind this is env vars can be any value during runtime, but the tree needs to be validated during compile time. In this case it would be: ``` swiftmailer: delivery_addresses: - '%env(MAILER_DELIVERY_ADDRESS_1)%' - '%env(MAILER_DELIVERY_ADDRESS_2)%' ``` username_2: I am closing here as this is the expected behaviour. Status: Issue closed username_0: @username_1 it wouldn't work. This is equal to `[null]`, so count of `delivery_addressess` is bigger than 0. See https://github.com/symfony/swiftmailer-bundle/blob/master/DependencyInjection/SwiftmailerExtension.php#L336 Looks like env should be convenient replacement of `parameters.yml` but in fact it is not. username_1: environment variables and parameters have different usecases, so i wouldnt call it a replacement per se. Let alone to blindly update all parameters to env vars. You say this doesnt work? ```yaml swiftmailer: delivery_addresses: - '%env(MAILER_DELIVERY_ADDRESS_1)%' - '%env(MAILER_DELIVERY_ADDRESS_2)%' ``` username_0: I try to say that when yours config will be converted to php array it become `[null, null]`. There is two members. `null` and `null`. Count of such array is 2 not 0. Thus Swiftmailer will try to send all email not to real users, but to this two emails. I should either pass `null` *instead* of array, or array with zero members. Only then swiftmailer will send email to real users. On my production server config should be equals to: ```json swiftmailer: delivery_addresses: null or swiftmailer: delivery_addresses: [] ``` On my staging server config should be equals to: ```json swiftmailer: delivery_addresses: - test@email ``` With env var it is impossible to do so. username_1: @username_0 i got it :) thx. Currently this is impossible indeed, but it can be done by prepending a dynamic config in php ... https://github.com/symfony/symfony/blob/510977dd193b21a18b86855cba9f9dfef2d9d0bb/src/Symfony/Component/DependencyInjection/ContainerBuilder.php#L695 username_1: A permanent solution could be to expand the env syntax, in an effort to expose more type information. ``` %env(string[]:json:MAILER_DELIVERY_ADDRESSES)% ``` Here `string` is an existing prefix and the special `[]` notation would tell us this is an array of strings, which is compatible with the `swiftmailer.delivery_addresses` node. Moreover, it would actually be validated as such during runtime. cc @username_3 username_3: Or we patch the bundle to make it accept an array and deal with it in a factory. I'd prefer this over adding more complexity. username_4: Is there any proposed solution that would allow the use of the json processor? 
Using : ``` swiftmailer: delivery_addresses: - '%env(MAILER_DELIVERY_ADDRESS_1)%' - '%env(MAILER_DELIVERY_ADDRESS_2)%' ``` Is actually unusable if I wish to have 2 emails in one environment and 4 in another one. The dynamism allowed by the arrayNode is broken. Thank you! username_4: Hi guys, I've managed to make the code work manually overwriting : ``` // In Symfony\Component\Config\Definition\ArrayNode /** * {@inheritdoc} */ protected function allowPlaceholders(): bool { return false; } // to /** * {@inheritdoc} */ protected function allowPlaceholders(): bool { return true; } ``` No errors were thrown. Can anyone shed some light on why was it set to false and, maybe, the negative repercussions to setting it to "true"? username_1: because it would only work "by chance", e.g. if the env is set correctly. At this point you can change the env to any value and it would always pass config validation. See #29270 for a different approach. username_5: is there any progress with this issue? username_6: Regardless of how the configuration tree of the swiftmailer bundle is set up: what _is_ the correct way to allow dynamic values for a scalar array node in the configuration tree of your _own_ bundle? I am unable to figure out the correct way to do this. username_7: I've just spent ages trying to work out why the example in the docs isn't working. Has anything changed in the past 2 years to allow this to work somehow? If not, might be worth updating the docs to remove this as a bad example to save others time?
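Sketching the "prepend a dynamic config in php" route linked earlier in the thread. Assumptions: the extension class name is invented, it is registered by your bundle/kernel, and the env var must be available at container compile time (unlike `%env()%` processors, which resolve at runtime and therefore cannot pass node validation here):

```php
<?php
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Extension\PrependExtensionInterface;
use Symfony\Component\HttpKernel\DependencyInjection\Extension;

class AppExtension extends Extension implements PrependExtensionInterface
{
    public function prepend(ContainerBuilder $container)
    {
        // Decode the JSON env var ourselves at compile time.
        $addresses = json_decode($_SERVER['MAILER_DELIVERY_ADDRESSES'] ?? '[]', true);

        // An empty array keeps swiftmailer's "send to real users" behaviour.
        $container->prependExtensionConfig('swiftmailer', [
            'delivery_addresses' => $addresses,
        ]);
    }

    public function load(array $configs, ContainerBuilder $container)
    {
        // Regular extension loading would go here.
    }
}
```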
thomasp85/lime
776363301
Title: Question about LIME results Question: username_0: Hi, I am trying to use LIME on R, and I need some explanations about the results produced by the package. I have 144 records with 25 parameters each. Records are labeled “Cat” or “C3” depending on the parameter's value. If I run a LIME analysis with the command: **model <- train(b_train, b_lab, method = 'rf')** two times with the same test record I receive two different results (see annex). Please can you explain to me this behaviour ? ![S001A](https://user-images.githubusercontent.com/34463553/103342812-c66e6d00-4a8a-11eb-943c-65f3ea3b070f.png) ![S001B](https://user-images.githubusercontent.com/34463553/103342836-d1290200-4a8a-11eb-87dc-db8cd7e17bb6.png) Answers: username_1: I guess that's because of the instability of the LIME framework. [](https://towardsdatascience.com/instability-of-lime-explanations-3e0efc00a7de) Personally, I think if your predictions have no noise then they should be stable. Not sure about this though.
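One hedged way to make repeated runs comparable, assuming the random-perturbation instability explanation above. Object names are illustrative and presume an explainer was already built from the caret model:

```r
# LIME builds explanations from random perturbations of the record, so
# fixing the RNG seed right before each call makes runs reproducible.
set.seed(42)
explanation <- lime::explain(test_record, explainer,
                             n_labels = 1, n_features = 5)
```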
MetaMask/eth-phishing-detect
527717345
Title: [Blacklist Request] Fake BLT token giveaway Question: username_0: https://bloom.reward-programs.erc20-tokens.com/ https://urlscan.io/result/36c6a307-f2c5-4f82-802c-c62a81e21137 https://bloom.reward-programs.erc20-tokens.com/myetherwallet.html?/access-my-wallet https://urlscan.io/result/33ad3119-5a78-49dc-9a47-2e8519b0b7ae Status: Issue closed Answers: username_1: Thanks for the report!
spring-projects/spring-boot
495537446
Title: When JAR was build on Windows, the launch script embedded in JAR file can not be started Question: username_0: Due to Windows line ending, the script fails to start in bash: ```plaintext -bash: ./application.jar: /bin/bash^M: bad interpreter: No such file or directory ``` Answers: username_1: Thanks for the report, but we'll need some more information to be able to diagnose the problem. - How are you building the jar? Both the Maven and Gradle plugins read and write the scripts as a stream of bytes so the script should be written as-is into the jar with no changes to line endings. - Are you using the default launch script or a custom script? - What versions of Windows and bash are you using? Is this a Linux subsystem running within Windows? Status: Issue closed username_0: Thanks for the feedback. We are indeed using a modified version of launch script (which I didn't realized until now). I just tested this with a pure Spring Boot project without modification. It turns out the line ending is good. username_2: Thanks for letting us know.
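One hedged way to normalize a custom launch script that was edited on Windows before it gets embedded; the path is an assumption about where such a script lives in a given build:

```sh
# Strip carriage returns so bash does not choke on /bin/bash^M.
tr -d '\r' < src/main/resources/launch.script > /tmp/launch.script \
  && mv /tmp/launch.script src/main/resources/launch.script
```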
dart-lang/sdk
260711181
Title: Need a supported/stable way of invoking `main` from a module Question: username_0: The main issue today is that each module exports each dart library under a "scope", and the name of that scope roughly maps to the dart file name but it does some munging to make sure its a valid JS identifier. For instance see [this code](https://github.com/dart-lang/pub/blob/9e216e0a5e415aa12cf7f356547a3cb7b14f23a1/lib/src/dartdevc/dartdevc.dart#L122) in pub which replaces `.` with `$46`. We need a reliable way of invoking `main` that doesn't require one off fixes like this that aren't guaranteed to work in the future. Answers: username_1: Closing in favor of: #31516 username_1: Closing for real.
remote-job-boards/software-engineering
947163818
Title: Argyle [PyJobs]: Backend Software Engineer Question: username_0: **Tags:** #python #docker #golang #gcp #dev #engineer #backend #digital-nomad **Published on:** July 16, 2021 **Original Job Post:** https://remoteOK.io/remote-jobs/105124-remote-backend-software-engineer-argyle-pyjobs ![](https://remoteOK.io/assets/jobs/e7c1726cf2da0b7c2a27a34fd7d5c3961626453963.png) Argyle is a remote-first, Series A fast-growing tech startup that has reimagined how we can use employment data. Renting an apartment, buying a car, refinancing a home, applying for a loan. The first question that they will ask you is, "how do you earn your money?" Wouldn’t you think that information foundational to our society would be simple to manage, transfer and control? Well, it’s not! Argyle provides businesses with a single global access point to employment data. Any company can process work verifications, gain real-time transparency into earnings and view worker profile details. We are a fun and passionate group of people, all working remotely across 19 different countries and counting. We are now looking for Senior Backend Engineers to come and join our team. ## What will you do? - Experience and a big passion for API design, scalability, performance and end-to-end ownership - Design, build, and maintain APIs, services, and systems across Argyle's engineering teams - Debug production issues across services and multiple levels of the stack - Work with engineers across the company to build new features at large-scale - Managing k8s clusters with GitOps driven approach - Operating databases with large datasets - Concurrent systems programming ## What are we looking for - Enjoy and have experience building APIs - Think about systems and services and write high-quality code. We work mostly in Python &amp; Go. However, languages can be learned: we care much more about your general engineering skill than knowledge of a particular language or framework. - Hold yourself and others to a high bar when working with production systems - Take pride in working on projects to successful completion involving a wide variety of technologies and systems. - Thrive in a collaborative environment involving different stakeholders and subject matter experts #Salary and compensation $40,000 — $60,000/year ### Location 🌏 Worldwide<br><issue_closed> Status: Issue closed
MattKetmo/EmailChecker
184000106
Title: Compatibility with FosUserBundle
Question: username_0: Hello, is it compatible with FosUserBundle? Is it possible to add it by modifying validation.xml and use it as an assert, or can we only use it from controllers with FosUserBundle? Thank you, Regards
Answers: username_0: Hello, sorry, my bad. Here is how to do it: add this in validation.xml:

```xml
<constraint name="EmailChecker\Constraints\NotThrowawayEmail"/>
```

Regards
Status: Issue closed
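For completeness, a fuller validation.xml sketch around that constraint line. The surrounding mapping structure is standard Symfony; the class and property names are assumptions about the FosUserBundle entity being validated:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<constraint-mapping xmlns="http://symfony.com/schema/dic/constraint-mapping">
    <!-- Apply the throwaway-email check to the user's email property. -->
    <class name="AppBundle\Entity\User">
        <property name="email">
            <constraint name="EmailChecker\Constraints\NotThrowawayEmail"/>
        </property>
    </class>
</constraint-mapping>
```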
CodeDreamfy/CodeDemo
268462232
Title: Summary of front-end debugging tools
Question: username_0:
### Proxy tools
`Charles`: a common packet-capture tool on macOS; it can map online files to local files.
`Fiddler`: a common packet-capture tool on Windows; it can likewise set up a proxy, simulate sending requests, and map online files to local ones via the `AutoResponder` option.
### Multi-device synchronization tools
`BrowserSync`, together with the two-way live-reload styling tool `Emmet LiveStyle`.
### Emulator debugging
For Android, use `Genymotion` (requires VirtualBox); for iOS, use the Simulator provided by Xcode.
### Online debugging
`Manymo`, a free online Android emulator.
### Multi-platform debugging
`Ghostlab`, a responsive web design testing tool.
### Mobile web development debugging
`Weinre`, a Node.js-based remote debugging tool.
### Remote JS debugging and testing
`Vorlon.js`
### Cloud real-device debugging
`BrowserStack`
### Web-based mobile device management and control
STF
### Multi-browser compatibility testing platform
F2etest
hhtokpinar/sqfEntity
1076483143
Title: how to use Sequence in table
Question: username_0: I need to auto-increment a non-primaryKey field. How can I do this?
Answers: username_0: ![image](https://user-images.githubusercontent.com/71307751/145526605-b9e1b2e5-c5a3-4faf-9553-39a4e82530cb.png)
This code gives an error of "The method 'uu' isn't defined for the type 'ItemTypeTest'" in the g.dart file.
username_1: How about defining the sequence individually, outside of the table definition, and referring to the sequence?
username_1: There's a sequence sample in the example, and it has no problem.
Status: Issue closed
kimushu/rubic-vscode
314388083
Title: Program break does not work on mac
Question:
- Rubic Version: 0.99.16
- VSCode Version: 1.21.1
- OS Version: mac OS High Sierra

Steps to Reproduce:
1. Write an endless program to GR-CITRUS
```
loop do
  led
  delay 500
end
```
2. Start program via Rubic
3. Press "Stop debug" button
Answers: username_0: Reproduced on Windows as well:
Version 1.21.1
Commit 79b44aa704ce542d8ca4a3cc44cfca566e7720f1
Date 2018-03-14T14:46:47.128Z
Shell 1.7.9
Renderer 58.0.3029.110
Node 7.9.0
Architecture x64
username_0: GR-CITRUS side: Closed by wakayamarb/wrbb-v2lib-firm#22
Rubic side: Closed by 476d4b98391b766cda12c02d922e23751667940f
Status: Issue closed
NamelessCoder/typo3-repository-client
100011450
Title: SOAP error: Failed Sending HTTP SOAP request
Question: username_0: Maybe there's something off with the TER SOAP interface?
Answers: username_1: Hi Mathias - have you been able to reproduce this since? Like the error message says, it sounds like temporary unavailability (maybe a service restart at an unlucky time during your upload).
Status: Issue closed
username_0: Looks better now, but now I get the dreaded "supported version error" from #1 again; the `depends` are still the same.
username_1: I assume you've removed `cms` from the list of dependencies? And that all your dependency versions have both a min- and max version? And that the max supported TYPO3 version is actually one that is released? - Just going through the usual causes ;)
username_0: Just commented in #1, thanks for asking. ;-)
spirosikmd/angular2-focus
343845513
Title: update to angular 6
Question: username_0: Hello guys, do you plan to update this library to be compatible with Angular 6?
Answers: username_1: Hi @username_0! Thank you for submitting this issue! Yes, the plan is to update the library to support Angular 6 soon. Just for my understanding, have you tried it with an Angular 6 project? Does something break?
username_2: You will probably need `rxjs-compat`, as ng 5 and 6 use different versions of RxJS.
username_1: Hi @username_0 and @username_2! Can you give this branch a try, just to make sure everything works properly? Thanks!
Status: Issue closed
username_1: :tada: This issue has been resolved in version 1.1.4 :tada: The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/angular2-focus)
- [GitHub release](https://github.com/spirosikmd/angular2-focus/releases/tag/v1.1.4)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
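For anyone landing here mid-upgrade, the `rxjs-compat` step mentioned above is just:

```sh
# Bridges RxJS 5-style imports while on Angular 6 / RxJS 6.
npm install rxjs-compat --save
```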
knative/docs
627576796
Title: improve webdev and pagespeed scores Question: username_0: Google's public website measurement tools indicate that knative.dev and the pages on that site could use some improvements. https://web.dev/measure/ https://developers.google.com/speed/pagespeed/insights/?url=knative.dev&tab=mobile https://developers.google.com/speed/pagespeed/insights/?url=knative.dev&tab=desktop https://www.thinkwithgoogle.com/feature/testmysite/ testmysite report: https://storage.googleapis.com/gweb-mobile-hub-test-my-site.appspot.com/public/reports/efd38062ead9419386ce04fdcfc45461.pdf I mainly looked at the homepage, but testmysite averages across the whole domain. I'm guessing we use some shared css etc across the whole site, and that fixing the homepage will improve most/all other pages as well, but others should be checked too. Possibly of interest: https://developers.google.com/speed/ Answers: username_0: Sounds like the current best tool from Google is https://web.dev/vitals/ https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html username_1: @username_0 please raise this issue in the website repo, since it's not related to actual docs content. Thanks. Status: Issue closed username_0: Done: https://github.com/knative/website/issues/188
kristijanhusak/vim-packager
560460675
Title: vim-packager removes itself when running PackagerClean Question: username_0: vim-packager removes itself when running PackagerClean I get this: "Clean up [$MY_HOME]/.config/nvim/pack/packager/opt/vim-packager — Waiting for confirmation..." This is the part of my config. ```viml packadd vim-packager function! PackagerInit() abort call packager#init() call packager#add(... endfunction command! PackagerInstall call PackagerInit() | call packager#install() command! -bang PackagerUpdate call PackagerInit() | call packager#update({ 'force_hooks': '<bang>' }) command! PackagerClean call PackagerInit() | call packager#clean() command! PackagerStatus call PackagerInit() | call packager#status() ``` Answers: username_1: Do you have vim packager itself added as a package? ``` call packager#add('username_1/vim-packager', { 'type': 'opt' }) ``` You didn't paste any of packages here. username_0: Thanks. That was an issue. My bad. I was comparing with plug and last time put the packages I used in plug not as in documentation. Please close the issue. username_0: And thanks for so fast feedback. username_1: No problem. Closing. Status: Issue closed
nightwatchjs/nightwatch-docs
173316185
Title: Performance: docs page performance drops from repeated section changes
Question: username_0: Possibly some kind of leak or something in the UI, but if you click back and forth on the navigation links Developer Guide and API Reference many times repeatedly on http://nightwatchjs.org/, scrolling becomes less responsive. I guess at around 15-20 clicks or so you should see a noticeable difference.
Answers: username_1: Yeah, probably the bootstrap affix or scrollspy aren't cleared properly. Can you see if there's any improvement now?
username_0: Oh, yeah, it's much better now. There's still a little bit of a hit, but it's not enough to really matter, and this was from bouncing back and forth around 70 times or so.
Status: Issue closed
Shougo/defx.nvim
665543462
Title: How can i change the directory name color with the latest version defx Question: username_0: **Warning: I will close the issue without the minimal init.vim and the reproduction instructions.** # Problems summary When update the latest version . the default highlight of directory name seems wired with my colorscheme. So I want to change it withe this option `hi Defx_filename_directory guifg=red`. but failed... ## Expected change colorscheme . ## Environment Information * defx version(SHA1): latest * OS: macos 10.15.6 * neovim/Vim version: neovim-0.5-nightly * `:checkhealth` or `:CheckHealth` result(neovim only): Defx check is OK ## Provide a minimal init.vim/vimrc with less than 50 lines (Required!) ```vim set runtimepath+=~/.cache/vim/dein/repos/github.com/username_1/defx.nvim filetype plugin indent on syntax on call defx#custom#option('_', { \ 'resume': 1, \ 'winwidth': 30, \ 'split': 'vertical', \ 'direction': 'topleft', \ 'show_ignored_files': 0, \ 'columns': 'mark:indent:git:icons:filename', \ 'root_marker': '[in]: ', \ }) hi Defx_filename_directory guifg=red ``` ## The reproduce ways from neovim/Vim starting (Required!) 1. nvim - u mininit.vim 2. Defx 3. color of directory name don't change. ## Generate a logfile if appropriate ## Screen shot (if possible) ## Upload the log file Status: Issue closed Answers: username_2: @username_1 is there a way to know in column if nvim namespace is used? (`_ns` property in view) ? I don't want to add `syntax match` commands if not needed. username_1: You don't need to check. Because if namespace is used, the syntax is not defined. Please use `highlight_commands` to define highlights. username_2: @username_1 It's not working for me. I have to do same thing in the `__init__` that you do: https://github.com/username_2/defx-icons/commit/434dd17a0026ed5c251a8aecd2f15a3e5e0639e1#diff-b6d162fcea11dc4aea1ae69031651cdaR28 username_1: @username_2 Really? I need to test. Please provide the reproduce instructions. username_0: @username_1 It seems like does not work fine. username_1: You should check the highlight name in defx buffer. username_2: @username_1 nevermind, I fixed it. I cached syntax_name at beginning, it was wrong value. username_0: @username_1 I had updated defx. i will check. @username_2 i also use defx-git and defx-icons. maybe related? username_1: OK. username_0: @username_1 Also use the mininit.vim. I used `:syntax` command in defx buffer. got `No syntax items defined in this buffer` username_1: OK username_0: another question.. what highlight of root path. i tried the `Defx_root` `Defx_root_directory`. username_1: Please check highlight command result. Defx_filename_root_marker username_0: @username_1 Not marker. there ![image](https://user-images.githubusercontent.com/41671631/88455233-38bc9f00-cea6-11ea-9fc9-ee5b906bb59d.png) username_1: I have fixed it. `Defx_filename_root` username_0: @username_1 . Fixed when update defx. Thanks for work.
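A small Vim sketch collecting the highlight-group names that came up in this thread; the autocmd wrapper (re-applying overrides so a colorscheme load does not clobber them) is an assumption, not from the thread:

```vim
" Override defx's directory and root-path colors after any colorscheme load.
autocmd ColorScheme,VimEnter * highlight Defx_filename_directory guifg=red
autocmd ColorScheme,VimEnter * highlight Defx_filename_root guifg=orange
autocmd ColorScheme,VimEnter * highlight Defx_filename_root_marker guifg=grey
```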
montlikadani/TabList
611228714
Title: Per-group TabList showing for all groups Question: username_0: <!-- These comments will not show just read it and you don't need to delete them.--> ### Problem <!--Understand what the problem is with the plugin.--> There are multiple groups, and the TabList for 1 group is showing for everyone (people not in the group) ### Details Plugin version: 5.1 Software <!--(Spigot/Bukkit etc.. `/version`) -->version: 1.15.2 (built with BuildTools) Relevant plugins<!-- (optional)-->: EssentialsX, LuckPerms, Vault ### Console error ``` Send the console error if you have it. ``` ### Configuration file(s) <!--Send the configuration file(s) to [pastebin.com](pastebin.com) or [hastebin.com](hastebin.com) or to other sites.--> [config.yml](https://pastebin.com/eZaXEaCU) [tablist.yml](https://pastebin.com/4DYePsYN) <!--Or if you using bungee then send the bungeeConfig.--> ### Screenshots (optional) <!--Send a few pictures about the problem if you can.--> When there is 1 player on the server, everything works fine, but when another player joins the TabList shows the wrong group's TabList Answers: username_1: You should update the plugin to the latest version. This has already been fixed. Status: Issue closed
andygrunwald/go-jira
171835372
Title: Getting weird auth Error : stream error: stream ID 1; REFUSED_STREAM
Question: username_0: I test against a cloud instance of Jira. Any idea?
Answers: username_1: Hey @username_0, can you please post your code to show what you are doing? And which cloud instance? Is this public?
username_0: Sure,

```go
cl, err := jira.NewClient(nil, "https://demisto.atlassian.net/")
cl.Authentication.AcquireSessionCookie(username, password)
```

username_0: It's a Jira-as-a-service instance...
username_0: Full error:

```
Received unexpected error "Auth at JIRA instance failed (HTTP(S) request). Post https://demisto.atlassian.net/rest/auth/1/session: stream error: stream ID 1; REFUSED_STREAM"
```

username_0: @username_1 any idea?
username_1: @username_0 Sorry, no idea right now. Did you solve the problem in the meantime?
username_0: nope :( implemented the lib by myself in the end
username_1: How did you do this? What did you do differently from go-jira? Can you maybe share your auth code / request code here? Maybe we can see the difference there.
username_2: I encountered this error today as well. My code is nothing special:

```go
client, err := jr.NewClient(nil, viper.GetString("jira.baseurl"))
if err != nil {
    log.Fatal(err)
}
```

This works when I'm running locally, but not when running in a docker container.
username_1: @username_2 Any chance that I can reproduce / get the same setup for your cloud Jira? To reproduce this error?
username_1: Any news @username_2 ?
username_3: @username_1 Also getting this error for a basic implementation, running this on my mac

```go
client, err := jira.NewClient(nil, "https://instance.atlassian.net/")
if err != nil {
    return nil, err
}
res, err := client.Authentication.AcquireSessionCookie(user, pass)
if err != nil {
    log.Println("Auth result: ", res)
    return nil, err
}
return client, nil
```

username_2: Sorry, I missed the initial mention. I wasn't able to fix this error, only work around it by building locally (on osx with build flags for linux) and pushing the binary to docker. I _think_ it has to do with SSL, and passing in an `http.Client` with modified options might work.
username_0: Yes, I think @username_2 is right, and you need to add this:

```go
tls.Config{InsecureSkipVerify: true}
```

to the http client.
Status: Issue closed
username_1: This seems to work now with a custom http client, right? If I am wrong, feel free to reopen this.
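Putting the workaround from the last comments together as a runnable Go sketch. This is insecure and for diagnosis only; note that supplying a custom `Transport` also happens to opt out of Go's automatic HTTP/2 upgrade in older Go versions, which is the layer REFUSED_STREAM comes from:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"

	jira "github.com/andygrunwald/go-jira"
)

func main() {
	// Custom client: skip TLS verification (debugging only) and, as a side
	// effect of the custom Transport, fall back to HTTP/1.1.
	httpClient := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	client, err := jira.NewClient(httpClient, "https://demisto.atlassian.net/")
	if err != nil {
		fmt.Println(err)
		return
	}

	// Credentials are placeholders; AcquireSessionCookie matches the
	// API used elsewhere in this thread.
	res, err := client.Authentication.AcquireSessionCookie("user", "pass")
	fmt.Println(res, err)
}
```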
humanoid-path-planner/hpp_tutorial
260640779
Title: Couldn't run tutorials Question: username_0: Hello, During installation, I did `sudo apt-get install pr2-indigo-*`, because `sudo apt-get install pr2-indigo-desktop` or `sudo apt-get install pr2-indigo-robot` returns `E: Unable to locate package`. Nevertheless, I can run pr2 on Gazebo. I started tutorials, Step 1 and Step 2 is ok, but when I can't run any python scripts in `src/hpp_tutorial/script`. In tutorial_1, I get the output attached as .txt. [tutorial_1_output.txt](https://github.com/humanoid-path-planner/hpp_tutorial/files/1333430/tutorial_1_output.txt) I tried `debug.py`, it launches gepetto-viewer with an object. The image is attached ![gepetto_screenshot1](https://user-images.githubusercontent.com/29755707/30865082-85e7f55c-a2cd-11e7-8cfc-049217f4dc12.png) tutorial_2.py returns the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/oguz14/hppdev/install/lib/python2.7/site-packages/hpp/corbaserver/problem_solver.py", line 478, in selectPathPlanner return self.client.problem.selectPathPlanner (pathPlannerType) File "/home/oguz14/hppdev/install/lib/python2.7/site-packages/hpp/corbaserver/problem_idl.py", line 285, in selectPathPlanner return _omnipy.invoke(self, "selectPathPlanner", _0_hpp.corbaserver.Problem._d_selectPathPlanner, args) hpp.Error: hpp.Error(msg='No path planner with name PRM') Can you help me about this? Answers: username_1: Dear username_0, Which version did you select to download on page "Download and install" ? It seems that you did not choose "stable". Can you cd in hpp_tutorial, type git log and send the output ? username_0: The output is: commit 3adfe3ffbf0d5b5bae25cf5b711ee425dfd449f7 Author: <NAME> <<EMAIL>> Date: Wed May 31 22:12:42 2017 +0200 Update tutorial_manipulation.py commit de1e34b22144a8a5fc205b622b1f088f191e9c8a Author: <NAME> <<EMAIL>> Date: Wed May 17 15:00:57 2017 +0200 Force Python 2.7 and synchronize CMakeLists.txt commit 69d07fefce36cbaac6cec461ba1373029ef430d6 Author: <NAME> <<EMAIL>> Date: Wed Feb 15 17:10:27 2017 +0100 Update scripts and documentation to pinocchio version of HPP. commit 8e1e09e00ab74b50b1abfc4664a9c17c4559daf5 Merge: fa99098 91ba20c Author: <NAME> <<EMAIL>> Date: Wed Feb 15 16:22:10 2017 +0100 In "Download and Install, Stable is selected(this is also by default), but the installation steps are: `wget -O $DEVEL_DIR/config.sh https://raw.githubusercontent.com/humanoid-path-planner/hpp-doc/master/doc/config.sh` `wget -O $DEVEL_DIR/src/Makefile https://raw.githubusercontent.com/humanoid-path-planner/hpp-doc/master/doc/Makefile` I might change 'master' to another branch in these steps, but I don't know which one is stable. username_1: It seems that you ran "tutorial_1_roadmap.py" which is outdated. We should either update it or remove it. Can you try to run "tutorial_1.py" instead ? tutorial_manipulation.py should work as well, but you need to run "hpp-manipulation-server" instead of hppcorbaserver.
Shougo/neosnippet.vim
209997886
Title: tex files format doesn't detected without preambula Question: username_0: `neovim`, like `vim` doesn't detects correctly `tex` format. Is it possible to fix it via changing `init.vim`? `neovim` 0.2.0, `deoplete`, `dein` ```vim set nocompatible set runtimepath+=/home/username_0/.config/nvim/dein//repos/github.com/username_1/dein.vim if dein#load_state('/home/username_0/.config/nvim/dein/') call dein#begin('/home/username_0/.config/nvim/dein/') call dein#add('/home/username_0/.config/nvim/dein/repos/github.com/username_1/dein.vim') call dein#add('username_1/neosnippet.vim') call dein#add('username_1/neosnippet-snippets') call dein#add('username_1/deoplete.nvim') call dein#add('easymotion/vim-easymotion') call dein#add('ctrlpvim/ctrlp.vim') call dein#add('tpope/vim-commentary') call dein#end() call dein#save_state() endif if dein#check_install() call dein#install() endif filetype on filetype plugin indent on set clipboard+=unnamedplus syntax off imap jj <Esc> set backspace=indent,eol,start set colorcolumn=80 set textwidth=80 set autoindent autocmd Filetype python setlocal ts=4 sw=4 sts=0 expandtab autocmd Filetype c setlocal ts=8 sw=8 sts=0 expandtab autocmd Filetype javascript setlocal ts=2 sw=2 sts=0 expandtab autocmd Filetype tex setlocal ts=2 sw=2 sts=0 expandtab set tabstop=2 set shiftwidth=2 let g:EasyMotion_do_mapping = 0 nmap <Space> <Plug>(easymotion-overwin-f2) let g:EasyMotion_smartcase = 1 map <Leader>j <Plug>(easymotion-j) map <Leader>k <Plug>(easymotion-k) imap <C-k> <Plug>(neosnippet_expand_or_jump) smap <C-k> <Plug>(neosnippet_expand_or_jump) xmap <C-k> <Plug>(neosnippet_expand_target) call deoplete#enable() let g:deoplete#enable_at_startup = 1 ``` Answers: username_1: This is not neosnippet issue. You can use `set filetype=tex`. Status: Issue closed
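A minimal sketch of the workaround; the first line is the fix from the answer above applied automatically, while the `g:tex_flavor` line is Vim's built-in detection knob for preamble-less .tex files and is an addition beyond the thread:

```vim
" Force the filetype for every .tex buffer, per the answer above.
autocmd BufRead,BufNewFile *.tex set filetype=tex
" Or let Vim's own detection assume LaTeX when no preamble is present.
let g:tex_flavor = 'latex'
```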
erikbrinkman/d3-dag
711695342
Title: Custom node appearance Question: username_0: I'm not d3 fluent, I figured I should ask: Is there a way to change the appearance of the nodes/vertex? Answers: username_1: Hi @username_0, very much like in d3, you can change the appearance of nodes and links by assigning them a css class. This done by selecting the corresponding links / nodes and changing the "class" attribute: ```javascript (selected object).attr('class', 'yourclass'); ``` username_2: Yes, as @username_1 suggested, d3-dag is just for computing coordinates to help you lay out appropriate html / svg elements. If you look (for example) at the [observable demo](https://observablehq.com/@username_2/d3-dag-sugiyama), the actual image gets passed in an already laidout dag, and uses d3 to create the elements. Expanding the selection you;ll see code like: ``` // Select nodes const nodes = svgSelection.append('g') .selectAll('g') .data(dag.descendants()) .enter() .append('g') .attr('transform', ({x, y}) => `translate(${x}, ${y})`); ``` which is essentially creating svg groups, that are translated so that they're in the "correct" position from the layout. Then we do two steps: ``` // Plot node circles nodes.append('circle') .attr('r', 20) .attr('fill', n => colorMap[n.id]); ``` this takes those groups and adds a circle with radius 20 and the appropriate color. then we do: ``` // Add text to nodes nodes.append('text') .text(d => d.id) .attr('font-weight', 'bold') .attr('font-family', 'sans-serif') .attr('text-anchor', 'middle') .attr('alignment-baseline', 'middle') .attr('fill', 'white'); ``` this adds the text with the id and sets appropriate attributes to it. Observable supports live editing of this code, so if you're interested in changing the appearance of nodes, I suggest trying to tweak this and see if you get your desired results. username_3: do you have recommendations on how to use non-circle nodes? like for example rectangles or ellipses? And do you think it may be possible for the algorithms to respect width and height, such that nodes/edges for example never overlap? Not a feature request (only slightly ;), i'm thinking of implementing it myself but may be helpful if you have some pointers / complexity estimate :) username_2: The general complexity of laying this out with respect to the intended shape and dimension of nodes is very difficult. But I can briefly comment on other other two. 1. non circular nodes: As I commended before, the section in observable where the nodes are created is: ``` // Plot node circles nodes.append('circle') .attr('r', 20) .attr('fill', n => colorMap[n.id]); ``` to make them squares you could do something like: ``` nodes.append('rect') .attr('width', 40) .attr('height', 40) .attr('x', -20) .attr('y', -20) .attr('fill', n => colorMap[n.id]); ``` where it's 40 because for circles you specify readius, and I set x and y to -20 as a way to center them, but there are many ways to accomplish that that ar eindependent of size. Note, that this is just d3, it has nothing to do with my library, so I encourage you to look up more of d3 if you want other things, for example, see the [`d3-shape`](https://github.com/d3/d3-shape) library. 2. For varying node sizes, this is supported somewhat, but it's poor and not super easy to use, but I can explain how to structure it. First because sugiyama used a layered layout, all nodes must have the same height. 
You can render nodes with different heights of course, but the coordinates they'll be assigned will be evenly spaced, so if you want to make a node taller, you'll have to give all the nodes the same space. In principle, each layer could have a different height, but I haven't implemented that. If you have a suitable idea, the change isn't hard and PRs are very welcome.
For width there's a little more flexibility, but it's not super easy to use. There are two independent features that you'll need to use together.
1. The first is to use [`nodeSize`](https://username_2.github.io/d3-dag/interfaces/_sugiyama_index_.sugiyamaoperator.html#nodesize): this will tell the layout to scale based off the number of nodes, rather than cramming them into a fixed space. Set the height to be the spacing you want between nodes vertically, and the width to be the "approximate" distance you want between nodes horizontally. Note, if you want a gap, you'll want to set it larger than the height of the nodes you want to render.
2. The second is to set the [`separation`](https://username_2.github.io/d3-dag/interfaces/_sugiyama_index_.sugiyamaoperator.html#separation) accessor to take the width you want per node into account. This accessor takes two nodes, and should output the relative spacing you want between them. A good rule of thumb is to structure this as `left_width + right_width`. If you already know how large you want to make the nodes, this should be relatively easy. Note that this will be called on more nodes than in the original dag, as there are some dummy nodes that correspond to edges that are getting wrapped around other nodes. You'll likely want to treat these as width 0, or maybe as some minimum value, but that choice is up to you. These nodes will have a distinct type, so branching off of `node instanceof d3_dag.SugiDummyNode` will tell you if a node is a dummy or not.
Hopefully that helps. If you have a good example that illustrates this (preferably with observable, but it doesn't really matter), I'm happy to add it to the examples section.
username_0: Thanks a lot for your help. I managed to change the appearance with your advice. It looks great now.
username_0: One last thing: is there a way to let the lib decide what would be the best size? If I remove the `.size([w,h])` the nodes are all on top of one another.
username_2: It's not really possible for this to set the layout size, as what you consider a node might change. Conceivably you could do something like look at the elements that were updated from a node, look at the bounding box of those elements and then adjust the layout, but that is not really the way this library or d3 are designed. The good news is that, on the flip side, it seems like you are setting some reasonable size for your nodes, so getting something close to what you want should be possible. The `size([w, h])` argument is meant to be used if you know the size of the area you want to lay out the nodes in. If you omit it, it assumes 1, 1, which is probably not what you want. The alternative option is to use `nodeSize([w, h])` which says to lay out the dag assuming nodes are roughly w x h. I assume at some point in your code (or in the observable) you're setting the nodes to some fixed size (40 x 40 in the observable). Then you'll want to set a node width and height of something like 60 to provide ample spacing. The downside of this approach is that you'll need to adjust the size of your rendering environment to the size of the dag that you lay out. Hope that helps!
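Putting the two pieces together, a minimal sketch of the `nodeSize`/`separation` combination described above could look like the following. This is an illustration rather than code from the library: the `d3_dag` namespace and `SugiDummyNode` check come from the comments above, while the 40-unit node width and 60-unit spacing are assumed values.

```javascript
// Sketch: lay out a dag whose real nodes will be drawn roughly 40 units wide.
const layout = d3_dag.sugiyama()
  .nodeSize([60, 60]) // approximate horizontal / vertical space reserved per node
  .separation((a, b) => {
    // Dummy nodes stand in for edges routed past other nodes; give them no width.
    const width = (node) => (node instanceof d3_dag.SugiDummyNode ? 0 : 40);
    return width(a) + width(b); // the "left_width + right_width" rule of thumb
  });

layout(dag); // assigns x/y coordinates to every node (and dummy node) in the dag
```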
username_2: Closing this out since it seems like things are resolved. Feel free to open a new issue or potentially ask on stackoverflow if you have questions like this. Status: Issue closed
NiciDieNase/chaosflix
394489838
Title: Fix automatic selection (Dash) Question: username_0: If you enable automatic selection via dash it just loads forever
Answers: username_0: Also, it's "choose" and not "chose"
username_1: Thanks for the hint about the typo. Regarding the streams, I can't replicate the issue. Are you sure it's not a network issue?
username_0: [video](https://drive.google.com/open?id=1giIQOekkPJ1pFJUKRUkg_LZH8pxCMzzB)
You can see I can easily stream HD and dash in the browser (yes, the network didn't respond once, but that's rare and I reproduced this 10+ times).
username_0: there is actually an error about getting https://cdn.c3voc.de/dash/s1/manifest.mpd in the video, but this is what I get if I try it manually:
```
~$ curl -L https://cdn.c3voc.de/dash/s1/manifest.mpd
<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:mpeg:dash:schema:mpd:2011"
 xmlns:xlink="http://www.w3.org/1999/xlink" xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="dynamic" minimumUpdatePeriod="PT6S" suggestedPresentationDelay="PT6S" availabilityStartTime="2018-12-28T19:00:07Z" publishTime="2018-12-29T01:52:37Z" timeShiftBufferDepth="PT10M0.0S" minBufferTime="PT12.0S">
<ProgramInformation>
</ProgramInformation>
<Period id="0" start="PT0.0S">
[...]
</Period>
</MPD>
```
username_1: What device/android-version/etc. are you using?
username_0: OnePlus 3T Android 8.0
username_1: This could be an issue with exoplayer. Since I can't replicate the issue myself, I'm not sure how I can proceed.
kevxo/rails_practice
970514249
Title: User Story 14, Child Update (x2) Question: username_0: As a visitor
When I visit a Child Show page
Then I see a link to update that Child "Update Child"
When I click the link
I am taken to '/child_table_name/:id/edit' where I see a form to edit the child's attributes
When I click the button to submit the form "Update Child"
Then a `PATCH` request is sent to '/child_table_name/:id', the child's data is updated, and I am redirected to the Child Show page where I see the Child's updated information
Status: Issue closed
puppetlabs/r10k
38771346
Title: r10k puppetfile utility has no option to *not* purge existing modules Question: username_0: Add an option to the r10k puppetfile utility to not purge. By default it purges all modules in the directory you specify, or in the local ./modules directory in which it is run. There is no option to not purge, which would be useful.
Answers: username_1: Being able to specify a module as local would be very useful for me too.
username_2: +1 for an option to not purge at all, or to exclude specific files/directories from removal.
username_3: Like many users that have already commented on this, it would be of great benefit to be able to configure r10k to *not* automatically purge at all, or to only purge a specified set of files/directories.
username_4: Right now r10k will purge the moduledir, but you can have r10k install modules from the Puppetfile into a secondary directory (https://github.com/puppetlabs/r10k/blob/master/doc/puppetfile.mkd#moduledir) so that your modules directory will not be modified; this can be used as a workaround until I can properly address this.
username_4: There are now two issues in JIRA for this - [RK-149](https://tickets.puppetlabs.com/browse/RK-159) which tracks the `local` module type, which is a short term fix, and [RK-159](https://tickets.puppetlabs.com/browse/RK-159) which tracks the long term fix of not purging modules created by Git (or SVN). At this point, I would like to just talk about the behavior of `r10k puppetfile install` - given that GH-92 was fixed, is there a use case for having something like `r10k puppetfile install --no-purge`? If you're installing modules out of a Puppetfile into a local directory, what are the circumstances under which you would have contents in the Puppetfile-managed directory that r10k did not place there?
username_5: @username_4 that's exactly what I would want from this. A --no-purge option would be great.
oehf/ipf
592503190
Title: Allow to configure TLS parameters for ATNA audit independently of the system properties Question: username_0: Allow the TLS parameters for ATNA audit to be configured independently of the system properties. Even though this is mostly not necessary, outgoing connections may differ with regard to encryption parameters etc.
Status: Issue closed
Putnam3145/fortbent
246258090
Title: Move each sburb class/aspect combo into newly written class system Question: username_0: Separate issue mostly for progress tracking--not having to put everything into one commit is a boon. Status: Issue closed Answers: username_0: Done with https://github.com/username_0/fortbent/commit/131124fd621d6f627abff0f3c1ec81c9abba5fc5, pending bugfixes.
stellar/go
728310628
Title: Captive core's online mode fails to start at ledger 1 Question: username_0: In the context of adding captive core integration tests (#3153) I found that captive core's online mode (with and without `run-from`) fails to start from ledger 1: This is a special case of #3155, which fails even with the fix at #3156.

Most relevant log lines:
```
GA76T [default INFO] *
GA76T [default INFO] * Target ledger 1 is not newer than last closed ledger 1 - nothing to do
GA76T [default INFO] * If you really want to catchup to 1 run stellar-core new-db
GA76T [default INFO] *
[...]
time="2020-10-23T12:28:04.740Z" level=error msg="Error in ingestion state machine" current_state="resume(latestSuccessfullyProcessedLedger=1)" error="Error running processors on ledger: Protocol version not supported: Error getting ledger: unmarshalling framed LedgerCloseMeta: unmarshalling XDR frame header: xdr:DecodeUint: EOF while decoding 4 bytes - read: '[]'"
```
<details>
<summary>Full log context (using `run-from`)</summary>
<p/>
```
time="2020-10-23T12:27:59.708Z" level=error msg="Error in ingestion state machine" current_state="resume(latestSuccessfullyProcessedLedger=1)" error="error preparing range: opening subprocess: error calculating ledger and hash for stelar-core run: error trying to read ledger header 1 from HAS: error opening ledger stream: Bad HTTP response '404 File not found' for GET 'http://localhost:1570/ledger/00/00/00/ledger-0000003f.xdr.gz'" next_state=start pid=196 service=ingest
time="2020-10-23T12:28:00.708Z" level=info msg="Ingestion system state machine transition" current_state="resume(latestSuccessfullyProcessedLedger=1)" next_state=start pid=196 service=ingest
time="2020-10-23T12:28:00.711Z" level=info msg="Resuming ingestion system from last processed ledger..." last_ledger=1 pid=196 service=ingest
time="2020-10-23T12:28:01.711Z" level=info msg="Ingestion system state machine transition" current_state=start next_state="resume(latestSuccessfullyProcessedLedger=1)" pid=196 service=ingest
time="2020-10-23T12:28:01.715Z" level=info msg="Released ingestion lock to prepare range" pid=196 service=ingest
time="2020-10-23T12:28:01.715Z" level=info msg="Preparing range" ledger=2 pid=196 service=ingest
<startup> [default INFO] Config from /opt/stellar-default/standalone/core/etc/stellar-captive-core.cfg
<startup> [default INFO] Generated QUORUM_SET: {
"t" : 1,
"v" : [ "standalone_1" ]
}

GA76T [default INFO] Starting stellar-core stellar-core 15.0.0~rc1 (b0de6c17bb61f618359e07f8e4a76f6b050af59a)
GA76T [Database INFO] Connecting to: sqlite3://:memory:
GA76T [Ledger INFO] Streaming metadata to file descriptor 3
GA76T [SCP INFO] LocalNode::LocalNode@GA76T qSet: d05a3b
GA76T [default INFO] Listening on 127.0.0.1:21626 for HTTP requests
GA76T [default INFO] *
GA76T [default INFO] * The database has been initialized
GA76T [default INFO] *
GA76T [Database INFO] Applying DB schema upgrade to version 13
GA76T [Database INFO] DB schema is in current version
GA76T [Ledger INFO] Established genesis ledger, closing
GA76T [Ledger INFO] Root account seed: SC5O7VZUXDJ6JBDSZ74DSERXL7W3Y5LTOAMRF7RQRL3TAGAPS7LUVG3L
GA76T [History INFO] Archive 'standalone_1' has 'get' command only, will not be written
GA76T [History WARNING] No writable archives configured, history will not be written.
GA76T [Ledger INFO] Starting up application GA76T [default INFO] Connection effective settings: GA76T [default INFO] TARGET_PEER_CONNECTIONS: 8 GA76T [default INFO] MAX_ADDITIONAL_PEER_CONNECTIONS: 64 GA76T [default INFO] MAX_PENDING_CONNECTIONS: 500 GA76T [default INFO] MAX_OUTBOUND_PENDING_CONNECTIONS: 56 GA76T [default INFO] MAX_INBOUND_PENDING_CONNECTIONS: 444 GA76T [Ledger INFO] Last closed ledger (LCL) hash is 76df1b9ff762a7a8af1f3c88549fe742e488e80fde561e34b1b6115bdac8d397 GA76T [Ledger INFO] LCL is genesis: [seq=1, hash=76df1b] GA76T [Ledger INFO] Assumed bucket-state for LCL: [seq=1, hash=76df1b] GA76T [Ledger INFO] Changing state LM_BOOTING_STATE -> LM_CATCHING_UP_STATE GA76T [default INFO] * GA76T [default INFO] * Target ledger 1 is not newer than last closed ledger 1 - nothing to do GA76T [default INFO] * If you really want to catchup to 1 run stellar-core new-db GA76T [default INFO] * [Truncated] GDFMI [Ledger INFO] LCL is genesis: [seq=1, hash=76df1b] GDFMI [Ledger INFO] Assumed bucket-state for LCL: [seq=1, hash=76df1b] GDFMI [Ledger INFO] Changing state LM_BOOTING_STATE -> LM_CATCHING_UP_STATE GDFMI [default INFO] * GDFMI [default INFO] * Target ledger 1 is not newer than last closed ledger 1 - nothing to do GDFMI [default INFO] * If you really want to catchup to 1 run stellar-core new-db GDFMI [default INFO] * GDFMI [default INFO] Application destructing GDFMI [default INFO] Application destroyed panic: send on closed channel goroutine 11824 [running]: github.com/stellar/go/ingest/ledgerbackend.(*stellarCoreRunner).start.func1(0xc00080e630) /Users/fons/stellar-go/ingest/ledgerbackend/stellar_core_runner_posix.go:41 +0x5f created by github.com/stellar/go/ingest/ledgerbackend.(*stellarCoreRunner).start /Users/fons/stellar-go/ingest/ledgerbackend/stellar_core_runner_posix.go:40 +0x194 starting horizon... ``` </details> Status: Issue closed Answers: username_2: See: https://github.com/stellar/stellar-core/issues/2778#issuecomment-722702757
MarimerLLC/csla
93418207
Title: ApplicationContext.ClientContext disappears in mobile data portal host Question: username_0: Same as #381 but fix for version 4.5.x. Discussion of issue is here: https://github.com/MarimerLLC/cslaforum/issues/3 Status: Issue closed Answers: username_1: The issue has come back, relinking the discussion: https://github.com/MarimerLLC/cslaforum/issues/19
Homebrew/homebrew-cask
337610349
Title: have travis check for appcasts (eg. find_appcast) Question: username_0: ### Description of feature/enhancement When travis runs CI, have it check the cask for an appcast. ### Justification Frequently there is a better appcast (devmate, etc) than the one listed in the cask... however as a contrib I am not going to install every cask just to run the `find_appcast` tool so I can check. Since travis has to unpack every cask anyway, why not have travis also check to see if there is an appcast available? ### Example use case any cask submission/addition. Status: Issue closed Answers: username_1: @username_0 This is something I've been thinking about, it would be easier to implement as part of https://github.com/Homebrew/homebrew-cask/issues/48383. Closing in favour of that issue.
snowplow/snowplow
25445199
Title: Clean up enrich.common.CanonicalInput based on Thrift SnowplowRawEvent Question: username_0: The schema for raw events has evolved a little with the new Thrift SnowplowRawEvent. Let's tidy up CanonicalInput (including a rename) to reflect learnings from Thrift SnowplowRawEvent.
Answers: username_1: Migrated to https://github.com/snowplow/enrich/issues/198
Status: Issue closed
louking/members
564089940
Title: leadership task module Question: username_0: Interests
* all database items are separated into "interests", e.g., the name of a club

Security Roles
* leadership-admin is given access to create leadership task items
* leadership-member is given access to leadership tasks

Leadership Task
* leadership task checklist is displayed to leadership-member (table display)
* task is selected, and can be marked as completed
* items in the checklist are prioritized, with higher priority items displayed first
* items may have subtypes, with differing subtypes causing "edit task" display to show different fields
* task display may include (e.g.) link to document which must be read in order to mark task completed, etc.
* there may be metadata which needs to be collected for a particular task (e.g., conflict of interest information)
* leadership-admin can set period for task (e.g., conflict of interest may be required every other year, safe sport every year), after which the task is shown once again as uncompleted to the leadership-member

Answers: username_0: all features built with 260dfa250e5f9acdf047f31fca499ebe08e5ecc8
Status: Issue closed
pajaydev/ebay-node-api
619525680
Title: Can you find items by seller? Question: username_0: Hello, I want to get all the items for a specific seller on ebay. I checked the ebay api and there is a call findItemsIneBayStores. However, I tried using it with your library and so far it didn't work. Can I find items by a specific seller with your library?
Answers: username_1: @username_0 I think the best api you can go with is `findItemsIneBayStores`; I will integrate this library with `findItemsIneBayStores`. Thanks for raising this issue.
username_0: Yes. Are you planning to update the library with this feature?
username_1: @username_0 yes, I will try to publish it tonight.
username_0: Oh man that is totally great news! If you added the ability to sell items, that would be the greatest. (Things to consider)
username_2: @username_0 yes the selling API is in the works
username_1: @username_0 I can see that there is an existing API that supports searching based on the seller name. Kindly check with the below API.
```javascript
let ebay = new Ebay({
    clientID: clientId,
});

ebay.findItemsAdvanced({
    Seller: 'batterygallery'
}).then((data) => {
    console.log(data);
}, (error) => {
    console.log(error);
});
```
username_0: Hmm, I tried to use findItemsAdvanced but got an error:
TypeError: ebay.findItemsAdvanced is not a function
How were you able to use this call?
username_1: @username_0 which version are you using? You can see the reference here https://github.com/username_1/ebay-node-api/blob/f3bee06a072f8b56e0be13362374fb2afe0dbd0e/demo/findingApi.js#L62
username_1: @username_0 I have integrated with `findItemsIneBayStores`. Kindly install the latest version
```
npm install [email protected]
```
you can use this method https://github.com/username_1/ebay-node-api/blob/master/demo/finding.js#L81
username_1: Closing this issue; kindly create an issue if you face any other problems.
Status: Issue closed
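As a rough sketch, calling the new method presumably mirrors the `findItemsAdvanced` example above. The `storeName` option name here is an assumption; check the linked `demo/finding.js` for the actual signature:

```javascript
// Reuses the `ebay` instance constructed in the example above.
ebay.findItemsIneBayStores({
    storeName: 'batterygallery' // assumed option name
}).then((data) => {
    console.log(data);
}, (error) => {
    console.log(error);
});
```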
vercel/next.js
1125038824
Title: NextJS 11.1.4 - conflicting extension of react 'module' declaration Question: username_0: ### Run `next info` (available from version 12.0.8 and up)
Working on a library - this does not easily run the next command since there are no Next.js configs.
typescript - 4.4.2
nextJs - 11.1.4
react - 17.0.2
### What version of Next.js are you using?
11.1.4
### What version of Node.js are you using?
12.22.9
### What browser are you using?
N/A
### What operating system are you using?
macOS
### How are you deploying your application?
All next options
### Describe the Bug
When building typescript libraries that extend NextJS 11.1.4 and use react 17.0.2, I receive the following error:
```
node_modules/next/types/index.d.ts:46:5 - error TS2717: Subsequent property declarations must have the same type. Property 'loading' must be of type '"eager" | "lazy" | undefined', but here has type '"auto" | "eager" | "lazy" | undefined'.

46     loading?: 'auto' | 'eager' | 'lazy'
       ~~~~~~~

../../node_modules/@types/react/index.d.ts:2156:9
2156         loading?: "eager" | "lazy" | undefined;
             ~~~~~~~
'loading' was also declared here.
```
The cause of this is that NextJS bundles extensions to the react type definitions, BUT they conflict with type-checking. For now, I will be using skipLibCheck, but that is a nuclear option, since it skips all lib checking.
This issue was [previously closed](https://github.com/vercel/next.js/issues/29071) with the mention that it was fixed in the canary last year. I'm filing this bug since it doesn't look updated. Could we release a patched version that doesn't conflict with react types?
### Expected Behavior
Typescript transpiles everything correctly, and uses the appropriate type declarations from react instead of NextJS.
### To Reproduce
Create a typescript project that makes use of both the 'next' package and the react package.
```typescript
// index.tsx (JSX below requires a .tsx file)
import React, { useState } from 'react';
import type { Redirect } from 'next';

export const MyComp : React.FC = ({ children }) => {
  // useState returns a [state, setState] tuple; only the type usage matters here.
  const redirect = useState<Redirect>({
    statusCode: 301,
    destination: 'some-destination',
  });

  return (<div>This component is using both next and react packages</div>);
};
```
Once you've created the project:
```bash
tsc
```
You should get the same failure as above (as long as skipLibCheck does not equal true in your tsconfig.json).
Answers: username_1: I don't think we are going to backport patches like this. Upgrading to Next 12 solves the issue, with the one caveat that if you want `"skipLibCheck": false`, you might need to install `@types/react-dom` as a dev dependency as well.
Status: Issue closed
username_0: Just to be clear, changing a type definition for a still-supported version of NextJS is considered a breaking change. I'm not sure what all the NextJS eco-system is reliant on, but that seems a little drastic.
If this is the final resolution: for anyone who comes across this and doesn't want to disable all typechecking, please see my above workaround and keep an eye out for new NextJS releases that might implicitly fix this. :D
username_1: I might have been unclear. This has already been fixed in Next.js 12 and upwards; no need for a PR or any further fix. What I meant was that it is fairly easy to upgrade between major releases of Next.js, since the breaking changes are kept at a very low level, making it a very stable framework. I did not say that a type fix would be a breaking change, only that we are not implementing patches/fixes/features on older versions of Next.js unless it addresses a security issue.
malko/rocketchat-jira-hook
181619141
Title: read-only error on Snap installation Question: username_0: The problem mostly comes down to there being no separate, writable directory for cache & temp files. Is there any way to set a different .babel-cache path in the script?
Status: Issue closed
Answers: username_1: This is not related to this script; you should report it to the Rocket.Chat project instead, or to the snap package maintainer
sonata-project/SonataDoctrinePhpcrAdminBundle
70218030
Title: PhpcrOdmTree compatibility with Symfony 2.7 Question: username_0: With Symfony 2.7, the `templating.helper.assets` service now uses the `Symfony\Bundle\FrameworkBundle\Templating\Helper\AssetsHelper` class rather than the deprecated `Symfony\Component\Templating\Helper\CoreAssetsHelper` class. Since this service is passed as an argument of the `sonata.admin.doctrine_phpcr.phpcr_odm_tree` service (https://github.com/sonata-project/SonataDoctrinePhpcrAdminBundle/blob/master/Resources/config/tree.xml#L24), the related PhpcrOdmTree constructor argument type hint (https://github.com/sonata-project/SonataDoctrinePhpcrAdminBundle/blob/master/Tree/PhpcrOdmTree.php#L117) should be removed and additional logic added to check if the input is an instance of one of these classes.
Also, with the new `assets.packages` service in Symfony 2.7, it's possible that an instance of `Symfony\Component\Asset\Packages` or `Symfony\Component\Asset\Package` could be passed in to the constructor as well. Each of these classes has a getUrl() method, which appears to be the only requirement of the PhpcrOdmTree class.
Answers: username_1: Thanks for investigating these issues @username_0. For the upcoming 2.0 of this bundle, @username_2 is refactoring a lot of the setup; I expect some of the issues to be irrelevant for that.
@username_0 if you could do a pull request with those changes (remove typehint, add instanceof checks) against the 1.2 branch of this bundle, that would be great!
username_2: @username_1 can you maybe create a list of issues related to the tree browser? They seem to be spread across many repositories, many of them not followed by me. I expect most of them to no longer be valid, as both frontend and backend are completely revisited. So we have to go through all of them when tree browser 2.0 is in beta/rc
username_0: I created the related pull request https://github.com/sonata-project/SonataDoctrinePhpcrAdminBundle/pull/337.
username_3: so this can be closed?
username_4: *This issue can be closed: the fixing PR #337 is merged in 1.x, and 2.0 doesn't have the `PhpcrOdmTree` class anymore.*
Status: Issue closed
evennia/evennia
102483517
Title: API Question: username_0: Use something like the Django REST Framework to create an API for external utilities that don't require being part of the current process. This should be extensible/overridable to allow developers to add their own calls or change the existing ones. Some example applications:
1. Collecting gamewide stats
2. Externally run scripting languages
3. Integration with external services on a push basis instead of a pull basis
Answers: username_1: @username_0 Could you elaborate some more on what you mean by this and what kind of changes you are suggesting and their effects, maybe with links to relevant django docs?
username_0: @username_1 http://www.django-rest-framework.org/ contains information about the Django REST framework. Having this would allow for more straightforward JS integration. It might be possible, since they both use JSON, to reuse OOB code in some way so that a normal game connection isn't required for certain calls. You could have a call that moves an object, sends a message, lists owned items, deletes objects, etc.
username_1: @username_0 Hm, that does sound pretty nifty! I'm thinking I might put up a devel branch of my current reworking of the web client infrastructure for input from more people, maybe also for things like this; one still needs OOB for traditional telnet clients but the webclient is using plain JSON in my most recent implementation and who knows what other angles one might catch.
username_1: @username_0 I can see this being useful for allowing powerful extra features and effects on the database. I'm unsure about security though. Do you have any experience with how to do this in a safe way (can't allow just anyone to do stuff with the database after all)?
username_0: @username_1 The Django REST Framework, as linked earlier, has authentication support, with proper token handling, etc. You can plug in your own authentication and have permissions checks done based on locks if needed.
username_2: Going through Django REST Framework documentation plus this tutorial here: https://code.tutsplus.com/tutorials/beginners-guide-to-the-django-rest-framework--cms-19786 gives a really nice and easy picture of how you can start building an API on top of all of the Evennia functionality. Using DRF, everything will be 100% portable to the API. All that seems to be required is writing up the Serializers and defining exportable fields; editing will then have to be written in. It will be a lot of code, but none of it is particularly difficult.
username_2: Important features for security (looking above at username_1's concern) are CSRF and CORS. You could also use something like JWT in place of DRF's default choices for some additional security (this would use either a private key or a secret in order to encrypt the token object). With only tokens you run into the possibility of someone doing session hijacking, potentially from links on forums or whatever. The only complicated piece of this is that Evennia allows multiple sessions from multiple IPs; that is considered a poor security practice, but it could have to be enforced on the web as well.
username_1: @username_2 So were you working on this?
username_3: Anybody play around with GraphQL? http://graphql.org/ Here's a python-django graphQL server implementation: http://docs.graphene-python.org/projects/django/en/latest/tutorial-plain/
username_1: @username_3 No, but graphQL looks quite nice at a glance.
:)
username_4: My team's boss is going to have me start working with GraphQL starting in about March.
username_3: From what I understand about Evennia typeclasses, there are DB and NDB attributes... The NDB attributes are for ephemeral data... If a layer was created to directly query the database, it wouldn't be able to query NDB data? I'm assuming the single source of truth for the game state would be in the django models' memory, and not the DB, right? If so, and an external API were to be created, and if the requirements were to make it match the exact game state at the time of query, it would have to be hooked up with the Object class? I'm just kinda asking around and throwing ideas around, ever so slowwwwly getting acquainted with GraphQL. With GraphQL you can write your own resolvers, which resolve data from whatever source you want and then send the data back to the requestor.
username_1: @username_3 The game state does depend on in-memory data, yes. I think it'd be safe to say that NDB attributes should not be something you should be able to query over an API though. But yes, stuff like cache state etc will always be a concern.
username_5: I'd like to try to do some work on this for Hacktoberfest now that I have more experience with django rest framework and GraphQL. Django Rest Framework is a very natural add for any django project that will ever have an external API. My experiences with Graphene have been a bit mixed - I've found it to make projects harder to maintain due to the difficulty of mapping queries to resolvers, and the learning curve for developers is steep - for a project that's welcoming to new developers graphql may not be a good fit. It's very useful when your core audience are all engineers who will use your API in unexpected ways, so you aren't constantly adding more endpoints for new queries, but you can get similar flexibility with different filterset libraries for DRF even if the syntax is usually a bit more kludgy. I could add an example Graphene query/resolvers, but I think it'd be unlikely to ever be used, to be honest. I'll probably focus on adding reasonable starting endpoints/serializers for common tasks and documenting how a developer could add more for their game.
username_1: @username_5 Cool, yes I think focusing on DRF is probably the sanest approach for our use case. Having thought about this, I think that this implementation should not be too hard; there are a few needs people have from such an API that need to be considered (this is partly me thinking aloud so can be discussed):
- Accessing/editing generic, named typeclass data in an authenticated way. The most traditional thing would be to use DRF's ModelSerializers to have one view for ObjectDBs, one for AccountDBs etc. In the typeclass system this is a little more involved - at the very least Attributes (Nicks) and Tags (Aliases) need to be included - it'd be a pain to have to call those models separately (even though that may also be something possible to do).
- There should ideally be a way to retrieve custom models created by the user without them having to make a new serializer/view for them. I think this must be possible to generalize (but once you go there you may also consider why you'd want separate endpoints for the default ObjectDB/AccountDB/etc at all - why not just specify exactly which datatable and typeclass you want in the call ...?)
- People making custom clients want some generic way to send commands to the server.
As a first approximation, this could be a request that gets translated into an evennia `inputfunc`. The problem is the return - one could imagine the API endpoint waiting for the command to finish (really treating the Command class like a view!) and catching the `outputfunc` (sent by `msg()`) as the return value. One would then need to rework all default commands to only ever send one `msg()`. Alternatively, one could send the command, return some identifier, and then have another endpoint that the caller can poll to retrieve the result. The latter will be highly inefficient though. The third option is to *not* do this kind of thing with a REST-API but to make a new (or expand the existing) websocket with a more formal API spec for people to call. I'm leaning towards the third option honestly, since Commands can be highly non-suitable for the idea of a RESTful interface (keeping state, `@interactive` mode etc).
- Could you elaborate on which 'common' tasks you'd focus on?
username_5: Definitely agree on wanting models and Attributes to be included - they pretty much have to be, due to so much data being stored there for objects. Hmm. For the second part we could probably do something like that, creating some new serializer on the fly for an arbitrary model based on the natural key. Having all fields included by default makes me frown a little bit due to possibly exposing private data without users realizing it, but there isn't really any way to avoid that for a generic serializer that doesn't know about the model. I'll see what I can do, though a thorough how-to guide on writing their own serializers/views might be more beneficial than some sort of elaborate factory class. I'm not sure about commands - I think the set of commands that people wouldn't mind waiting for a finish response for is probably larger than the truly interactive stuff where websockets are a much better choice. For 'common' tasks I was really just referring to standard CRUD views for the different models. I figured I'd start simple.
username_1: This is now implemented in the `develop` branch.
username_1: Closing this issue since it's implemented in the `develop` branch, for better overview.
Status: Issue closed
Varixolust/Sales-Invoice-Interface
726854186
Title: Program not running correctly due to coding error Question: username_0: Review this code: [CSC512_Assignment_2 [10%].pdf](https://github.com/username_0/Sales-Invoice-Interface/files/5418650/CSC512_Assignment_2.10.pdf)
The requirements are here: [BEV Bakery.zip](https://github.com/username_0/Sales-Invoice-Interface/files/5418646/BEV.Bakery.zip)
ivpusic/react-native-image-crop-picker
270861127
Title: undefined is not an object (evaluating '_reactNativeImageCropPicker2.default.openPicker') Question: username_0: ### Version
- react-native-image-crop-picker latest
- react-native 0.49.0

### Platform
- Android

After running `react-native run-android`, it throws the error below. What can I do? Thanks.

undefined is not an object (evaluating '_reactNativeImageCropPicker2.default.openPicker')
<unknown> E:\reactnative_project\test1\MyApp\views\LoginPage.js:136:28
onComplete E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Animated\src\AnimatedImplementation.js:2172:31
<unknown> E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Animated\src\AnimatedImplementation.js:897:29
__debouncedOnEnd E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Animated\src\AnimatedImplementation.js:133:19
onUpdate E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Animated\src\AnimatedImplementation.js:341:28
callTimer E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Core\Timers\JSTimersExecution.js:98:17
callTimers E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\Core\Timers\JSTimersExecution.js:138:34
__callFunction E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:260:47
<unknown> E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:101:26
__guard E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:228:6
callFunctionReturnFlushedQueue E:\reactnative_project\test1\MyApp\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:100:17
Answers: username_1: +1
username_2: Installation failed. There are a number of similar issues, for example, https://github.com/username_2/react-native-image-crop-picker/issues/459
Status: Issue closed
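The error itself means the module's default export resolved to `undefined`, i.e. the package wasn't installed/linked correctly. Once installation succeeds, basic usage roughly follows the project README (the option values here are just illustrative):

```javascript
import ImagePicker from 'react-native-image-crop-picker';

// Open the gallery picker with cropping enabled.
ImagePicker.openPicker({
  width: 300,
  height: 400,
  cropping: true
}).then(image => {
  console.log(image);
}).catch(error => {
  console.log(error);
});
```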
egoist/poi
265537125
Title: Export webpack.config.js Question: username_0: Feature request: I'd love poi to be able to export the webpack config it uses when it runs live or builds. I have a few projects where I am required to have that webpack config for deployment (only a limited subset of npm packages is available), and while I love to use poi for development, being able to export webpack.config.js would be fantastic.
Answers: username_1: I don't get this, do you mean that you can't install poi for deployment because only a limited subset of npm packages is available?
---
currently it's impossible to export a raw webpack config file because we're using webpack-chain internally.
username_0: Yep sorry, the deployment image I have to use is prebuilt, and the process to add an npm package is a pain. It's a silly artificial barrier, but changing it will take a while :) At any rate, I figure this sort of portability might be useful to others as well.
username_1: @username_0 even if you got a webpack config file you would still need to install webpack and plugins to run the build process 😅
username_0: I understand that, and the image should have the plugins I need, and has webpack 3.something, already frozen.
username_0: I'd like to see what poi is using so I can line up the actual webpack config, where I cannot install poi, as closely as possible. If it's not an interesting feature, feel free to close this up :)
username_1: yeah I think it's more reasonable to update your deployment image instead 😄
Status: Issue closed
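A note on why the maintainer calls this impossible: webpack-chain can flatten its internal representation to a plain object, but a live webpack config is full of functions and RegExp values that don't survive serialization. A hypothetical sketch (assuming access to poi's internal webpack-chain instance, which poi does not actually expose) makes the problem visible:

```javascript
const fs = require('fs');

// webpack-chain instances can be flattened into a plain webpack config object.
const config = chain.toConfig(); // `chain` is the assumed internal instance

// But dumping that object is lossy: functions (plugins, option callbacks)
// and RegExp test patterns disappear under JSON.stringify.
fs.writeFileSync(
  'webpack.config.js',
  'module.exports = ' + JSON.stringify(config, null, 2) + ';\n'
);
```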
amarmesic/windows-phone-navigation-drawer
53578487
Title: DataBinding IsDrawerOpen fails + patch Question: username_0: In order to access the current IsDrawerOpen state, I had to DataBind it into my ViewModel. But IsDrawerOpen was implemented as a normal Property, so TwoWay DataBinding failed with an exception.
To allow TwoWay DataBinding in XAML it is required to implement a Dependency Property. I have added it. I am not very familiar with git pull requests, so I just created a patch file (it's a very small change anyway).
The patch file can be found at https://gist.github.com/username_0/6581d1ccc6eed5de3579
Then I can DataBind my property to my ViewModel, and my ViewModel is always aware when the drawer state changes:
```
IsDrawerOpen="{Binding Mode=TwoWay, Path=IsOpen, UpdateSourceTrigger=PropertyChanged}"
```
I needed this to make it work with Prism, as Prism has its own Navigation Manager and was closing my Page which contained the navigation when the drawer was opened, which was very undesired.
Answers: username_1: Thank you @username_0 Patch applied.
Status: Issue closed
mlox/mlox
203445572
Title: OpenMW support Question: username_0: Originally by Translator5. Transferred when the project moved.

So is there a possibility to make the python version sort the OpenMW config? Or is this more complicated because of different Linux distributions and different install locations?

Linux Mint 18 64 bit
OpenMW 0.40
Morrowind GOTY

How to sort mods for OpenMW on Linux -> Run Morrowind Launcher.exe using wine -> Enable your mods there -> run mlox using wine or the python version -> run OpenMW launcher & import the morrowind.ini settings & loadorder

---

If you want to look at it, here's a possible starting point: https://github.com/mlox/mlox/blob/master/mlox/mlox.py#L1079

----

Side note: if you use the mlox-working branch you don't need "Morrowind Launcher.exe". It's not the easiest, but it works.

Your directory structure should have:
* Morrowind.ini
* "Data Files/"
* "mlox/"

Here's what to do:
1. Make sure `[game files]` in morrowind.ini has no entries
2. Run mlox, and save the resulting load order.
3. Open the file with the new load order, and add "content=" to every line (search and replace is great for this)
4. Copy those lines to `openmw.cfg`

----

This is much more stuff to do 😆 - I think I will use wine instead

Answers: username_1: Yes this would be a very useful feature! I would really like to see support for OpenMW :)
username_0: Can someone please post an openmw.cfg with at least one active plugin. I can't access the system with my OpenMW install on it right now.
username_2: Here's one from Amenophis on Bethsoft [openmw.txt](https://github.com/mlox/mlox/files/849050/openmw.txt)
username_3: Any news on this?
username_3: Sorry for the above question. I just tried to load my openmw.cfg and it worked. You might want to close this issue though. It might confuse someone into assuming it's not implemented yet, as the issue is in the 'Open' state.
username_0: Hi, did you use the command line? If so, what options did you use? What operating system are you using? While file read support has been implemented for a while now, you may be surprised by attempting a write operation, as the file writer is designed to handle Morrowind.ini files. We currently don't support opening a random file and reading from a data directory. As such, warnings based on size will now be shown, and Mlox will not scrub your configuration file of any missing plugins. OpenMW support will be completed when a user can open the GUI and can easily choose to use the openmw.cfg file with their data directory. I will, however, add milestones so everyone can track the progress to that goal.
username_3: Hi, I'm using Windows, and I've loaded the config using the UI, not the command line. I do agree that this issue is not done to the level of quality that should be expected, as saving an openmw.cfg doesn't work. However, adding milestones would be great, as I almost didn't try to load the config at all, and I can imagine many people wouldn't. Thanks for the quick reply!
username_4: I would really love to get this working with OpenMW.
git/git-scm.com
144170177
Title: No support for graphemes in regexps Question: username_0: Git doesn't support `\X` (grapheme) in regexps passed to e.g. `git diff --word-diff-regex='…'` (tested on `2.7.4` installed on OS X with `brew install git --with-pcre --with-blk-sha1 --with-brewed-openssl --with-brewed-svn --with-persistent-https`).
Answers: username_1: This issue tracker is dedicated to the git-scm.com website. The issue you have raised is related to the git program. If you think there is a bug in the git program, please send your issue to the git mailing list <EMAIL>
username_0: Dooh. Thanks for the heads up!
Status: Issue closed
dilevin/computer-graphics-ray-casting
400952732
Title: Expected output of inside-a-sphere.json Question: username_0: I was wondering if we were supposed to display rays that intersect with the inside of the sphere, as in this image:
![image](https://user-images.githubusercontent.com/9597842/51421298-50f18680-1b6a-11e9-8778-168deb0f23e1.png)
Or if we are only supposed to display rays that intersect with the outside of the sphere, as in this image:
![image](https://user-images.githubusercontent.com/9597842/51421315-8a29f680-1b6a-11e9-9be0-446c79b151c2.png)
Answers: username_1: The desired output, whether it be ID, normal or depth, should match the images in the readme.
username_2: But the readme only has the sample output for some scenes, which doesn't include the scene mentioned above (inside-a-sphere). Is it possible for us to get the sample output for all the scenes?
username_3: Hi @username_2 @username_0 - please see the reference images for all scenes, added in commit 301399a. Hopefully these clarify what the desired output is. https://github.com/username_1/computer-graphics-ray-casting/tree/fix/images/output_images
Status: Issue closed
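For readers puzzling over the two renderings: they correspond to the two roots of the ray-sphere intersection quadratic (notation here is ours, not the assignment's). For a ray $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}$ and a sphere with centre $\mathbf{c}$ and radius $R$, substituting into $\|\mathbf{p}-\mathbf{c}\|^2 = R^2$ gives

$$\|\mathbf{d}\|^2\, t^2 + 2\,\mathbf{d}\cdot(\mathbf{o}-\mathbf{c})\, t + \|\mathbf{o}-\mathbf{c}\|^2 - R^2 = 0.$$

When the ray origin is outside the sphere, the smaller positive root is the visible (outside) hit, as in the second image. When the origin is inside, the constant term is negative, so the roots have opposite signs: the smaller root lies behind the origin and only the larger root is a valid hit, which is what produces the inside-of-sphere shading in the first image.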
MicrosoftDocs/Virtualization-Documentation
327753624
Title: #enable-the-hyper-v-role-through-settings Question: username_0: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#enable-the-hyper-v-role-through-settings
- The step "#enable-the-hyper-v-role-through-settings" was already checked.
- Using "Right click on the Windows button and select 'Apps and Features'" yields no option to that end.

I ended up getting there by typing the letters "Hyp" into "Win + S" and could then click "Turn Windows Features on or off". To be clear, it was not in "Apps and Features".

Freshly installed Win 10 Enterprise 2018 release.

Also, are these two paths doing the same thing? One with PS and one with UI?

---
#### Document Details

⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: 1df65b2d-24ba-71b0-0648-14c21f5b1b26
* Version Independent ID: a89c84df-6ef7-66a9-eec1-35ba94afdb1b
* Content: [Enable Hyper-V on Windows 10](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#enable-the-hyper-v-role-through-settings)
* Content Source: [virtualization/hyper-v-on-windows/quick-start/enable-hyper-v.md](https://github.com/Microsoft/Virtualization-Documentation/blob/live/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v.md)
* Service: **unspecified**
* Product: **windows-10-hyperv**
* GitHub Login: @username_2
Answers: username_1: Same issue here with Windows 10 Pro - fresh install, Hyper-V enabled from the control panel (Turn Windows Features On/Off). Could not find the Hyper-V manager.
username_2: Interesting. @username_1, do you have the PowerShell management tools? (check by running `Get-VM` as administrator)
@username_0 - yes, the PowerShell instructions and the Windows Features instructions do the same thing. Do you have all boxes under Hyper-V checked? Both Server and Management tools?
username_1: Sadly, I don't have that setup anymore. What I did was uncheck the box in the Windows Features, restart, recheck the box, restart, and I was able to find Hyper-V manager. I suspect that the software updates made right after the fresh install removed the Hyper-V manager, but I can't confirm (nor can I try to reproduce)
username_3: I'm about to install [Remote Server Administration Tools for Windows 10](https://www.microsoft.com/en-us/download/details.aspx?id=45520) which, from what I read, should solve the problem.
username_4: Same problem here. So tired of Windows at this point making everything so hard and changing settings all the time.
username_5: Hi, I checked the boxes in settings and that did not work. I also tried the commands in the powershell and that did not work either. I was trying to run docker on windows 10 but kept getting the "Hardware assisted virtualization and data execution protection must be enabled in the BIOS" error message. Haaaa? But I checked the boxes and ran the commands in the powershell. I found this article at https://www.laptopmag.com/articles/access-bios-windows-10 and it worked perfectly. I can now run docker on windows 10...
ivanhuay/terminator-syntax
245508613
Title: Deprecated selector in `terminator-syntax\index.less` Question: username_0: In `terminator-syntax\index.less`:
Starting from Atom v1.13.0, the contents of `atom-text-editor` elements are no longer encapsulated within a shadow DOM boundary. This means you should stop using `:host` and `::shadow` pseudo-selectors, and prepend all your syntax selectors with `syntax--`. To prevent breakage with existing style sheets, Atom will automatically upgrade the following selectors:

* `.comment` => `.syntax--comment`
* `.entity.name.type` => `.syntax--entity.syntax--name.syntax--type`
* `.entity.other.inherited-class` => `.syntax--entity.syntax--other.syntax--inherited-class`
* `.keyword` => `.syntax--keyword`
* `.keyword.control` => `.syntax--keyword.syntax--control`
* `.keyword.operator` => `.syntax--keyword.syntax--operator`
* `.keyword.other.special-method` => `.syntax--keyword.syntax--other.syntax--special-method`
* `.keyword.other.unit` => `.syntax--keyword.syntax--other.syntax--unit`
* `.storage` => `.syntax--storage`
* `.constant` => `.syntax--constant`
* `.constant.character.escape` => `.syntax--constant.syntax--character.syntax--escape`
* `.constant.numeric` => `.syntax--constant.syntax--numeric`
* `.constant.other.color` => `.syntax--constant.syntax--other.syntax--color`
* `.constant.other.symbol` => `.syntax--constant.syntax--other.syntax--symbol`
* `.variable` => `.syntax--variable`
* `.variable.interpolation` => `.syntax--variable.syntax--interpolation`
* `.variable.parameter.function` => `.syntax--variable.syntax--parameter.syntax--function`
* `.invalid.illegal` => `.syntax--invalid.syntax--illegal`
* `.string` => `.syntax--string`
* `.string.regexp` => `.syntax--string.syntax--regexp`
* `.string.regexp .source.ruby.embedded` => `.syntax--string.syntax--regexp .syntax--source.syntax--ruby.syntax--embedded`
* `.string.other.link` => `.syntax--string.syntax--other.syntax--link`
* `.punctuation.definition.comment` => `.syntax--punctuation.syntax--definition.syntax--comment`
* `.punctuation.definition.string, .punctuation.definition.variable, .punctuation.definition.parameters, .punctuation.definition.array` => `.syntax--punctuation.syntax--definition.syntax--string, .syntax--punctuation.syntax--definition.syntax--variable, .syntax--punctuation.syntax--definition.syntax--parameters, .syntax--punctuation.syntax--definition.syntax--array`
* `.punctuation.definition.heading, .punctuation.definition.identity` => `.syntax--punctuation.syntax--definition.syntax--heading,
[Truncated]
* `.punctuation.definition.italic` => `.syntax--punctuation.syntax--definition.syntax--italic`
* `.punctuation.section.embedded` => `.syntax--punctuation.syntax--section.syntax--embedded`
* `.support.class` => `.syntax--support.syntax--class`
* `.support.function` => `.syntax--support.syntax--function`
* `.support.function.any-method` => `.syntax--support.syntax--function.syntax--any-method`
* `.entity.name.function` => `.syntax--entity.syntax--name.syntax--function`
* `.entity.name.class, .entity.name.type.class` => `.syntax--entity.syntax--name.syntax--class, .syntax--entity.syntax--name.syntax--type.syntax--class`
* `.entity.name.section` => `.syntax--entity.syntax--name.syntax--section`
* `.entity.name.tag` => `.syntax--entity.syntax--name.syntax--tag`

Answers: username_1: Great job, thanks for the input
username_1: Thanks, I'll check it out
godotengine/godot
363388145
Title: AnimationTree with AnimationNodeStateMachine ignores AnimationPlayer speed Question: username_0: **Godot version:**
v3.1 master / c432ce4ee15fc396b2bccbbe2661b5bd34b9bee1
**OS/device including version:**
Ubuntu 18.04, 16.04 and Windows 10
**Issue description:**
An AnimationTree with an AnimationNodeStateMachine whose Anim Player property is set to an AnimationPlayer ignores the speed set in the editor, as well as playback_speed set in code. I am also unable to set the AnimationTree.anim_player.playback_speed property in code.
**Steps to reproduce:**
Create any simple animation with AnimationPlayer, change the speed, and play it with the AnimationTree:
```
$AnimationTree.get("parameters/playback").start("default");
$AnimationTree.get("parameters/playback").travel("to_right");
```
**Minimal reproduction project:**
[AnimationTree.zip](https://github.com/godotengine/godot/files/2413556/AnimationTree.zip)
Answers: username_1: This was unexpected for me as well, but I wonder whether that's really a bug or done by design, because I figured out I can change the playback speed with `AnimationNodeTimeScale` at any node of the tree.
username_0: @username_1 gonna try that, but it looks like a workaround since I already have an AnimationPlayer fully set up.
username_0: Is AnimationNodeTimeScale available for AnimationNodeStateMachine?
username_0: I guess I am not going to use it for a while, gonna wait for this to be fixed. Thanks!
username_2: @username_0 you could save your current tree as a resource and load it into a blend tree
username_3: My animations have different playback speeds for each animation (idle = 0.5, run = 1.4, jump = 0.8, etc). For example, I had to create a blendtree with an animation node assigned to idle and a time scale node set to 0.5 (even though the animation has this already) and name the blendtree idle. I had to do this for each animation... =(
username_4: To the best of my knowledge, there is no proper way to change the speed in the AnimationTree. It doesn't have a time_scale property itself, and all solutions described above are workarounds using other types of tree roots.
username_5: @username_4 Please open a proposal on the [proposal repository](https://github.com/godotengine/godot-proposals) if you think this is a missing feature.
jmhodges/howsmyssl
46962672
Title: howsmyssl.com requires vulnerable (old style ssl) negotiation Question: username_0: 1. use firefox
2. in about:config set security.ssl.require_safe_negotiation to true
3. go to https://howsmyssl.com
4. see "Secure Connection Failed An error occurred during a connection to www.howsmyssl.com. Peer attempted old style (potentially vulnerable) handshake. (Error code: ssl_error_unsafe_negotiation)"

This should be fixed for a service "that tells you how secure your TLS client is".
see also: https://wiki.mozilla.org/Security:Renegotiation#security.ssl.require_safe_negotiation
Answers: username_1: This is still not fixed; I'm getting the same error with Firefox 38.x and _security.ssl.require_safe_negotiation_ enabled. I'm using TLS 1.2 with AES (256 bit PFS and GCM ciphers).
username_2: Firefox 38.0.5 is supposed to be sending 0x00FF TLS_EMPTY_RENEGOTIATION_INFO_SCSV as a cipher suite when requiring safe negotiation per [RFC5746](https://tools.ietf.org/html/rfc5746#page-6), and it does not appear to be doing so. Instead it sends only these:
0xC02B: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
0xC02F: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
0xC00A: "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
0xC009: "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
0xC013: "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
0xC014: "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
0x0033: "TLS_DHE_RSA_WITH_AES_128_CBC_SHA",
0x0039: "TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
0x002F: "TLS_RSA_WITH_AES_128_CBC_SHA",
0x0035: "TLS_RSA_WITH_AES_256_CBC_SHA",
0x000A: "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
This is on TLS 1.2. This is not a howsmyssl bug, but a Firefox bug. Talk to them about it.
roddux/wordpress-dos-poc
544053160
Title: Response Dump Question: username_0: I'm trying to test this on my server running 5.3.x I never receive any pingbacks through my webhook. I also checked this on a few other remote servers too. I seem to be getting the same response. ``` [>] WordPress <= 5.3.? Denial-of-Service PoC [>] @username_1 2019 | Arcturus Security | labs.arcturus.net [+] Running in check mode [+] Got pingback URL "https://webhook.site/id.etc.etc" [+] Got target URL "https://target.url.etc" [+] Building 2 pingback calls [+] Request: <methodCall data> [+] Request size: 709 bytes [+] Sending check request [+] Request sent [+] Response headers: {'Date': 'x.x.x.x', 'Server': 'Apache', 'X-Powered-By': 'PHP/7.2.23', 'Connection': 'close, Upgrade', 'Upgrade': 'h2,h2c', 'Vary': 'Accept-Encoding,User-Agent', 'Content-Encoding': 'gzip', 'Content-Length': '206', 'Content-Type': 'text/xml; charset=UTF-8'} <?xml version="1.0" encoding="UTF-8"?> <methodResponse> <params> <param> <value> <array><data> <value><struct> <member><name>faultCode</name><value><int>0</int></value></member> <member><name>faultString</name><value><string></string></value></member> </struct></value> <value><struct> <member><name>faultCode</name><value><int>0</int></value></member> <member><name>faultString</name><value><string></string></value></member> </struct></value> </data></array> </value> </param> </params> </methodResponse> [+] Here's the part where you figure out if it's vulnerable because I CBA to code it [>] Check finished ``` Answers: username_1: Hey there @username_0, Thanks for the feedback. Can you supply some more information about your setup? Such as, what plugins do you have active on the WordPress install?
bruceadams/wdscli
236233838
Title: Support environment id to query Question: username_0: For news we should be able to query the read-only collection. But existing customers will have 2 read-only collections.
We should also support sending an environment id to query:
`wdscli -c credentials.json query --environment_id -r -m "collection_name"`
@bruceadams
sp614x/optifine
369675081
Title: "Fast" bubble columns Question: username_0: During Minecon Earth 2018, Bedrock developer <NAME> mentioned how the particles of bubble columns can be replaced with an animated texture in Bedrock Edition to improve performance. [(Watch it here.)](https://youtu.be/Jb4xSzTNd-I?t=10995) This sounds like something that would be perfect to implement in OptiFine due to the amounts of particles they create. For reference, this is how the bubble columns look like in Bedrock Edition if you disable Fancy Graphics: ![bedrock-bubble-column](https://user-images.githubusercontent.com/17113053/46876192-fa273300-ce3d-11e8-810c-ace1ea7d30a8.gif) Answers: username_1: I wonder how well would it work with resource packs though? Would they need a separate texture for that? Overall I like the idea though, would probably make it easier to debug for streams that are not continuously pushing ("flowing" and "static" water mixed). username_0: Yes, since the block model for the bubble column in Bedrock Edition consists of three textures: - One that is wrapped around the column (the one with the larger and slower bubbles) - One that is wrapped around the vertical pipe in the middle (the one with the smaller and faster bubbles) - One for the top of the bubble column
neo4j-graphql/neo4j-graphql
377131628
Title: How did you install in Neo4j Desktop? Question: username_0: I'm attempting to try the GraphQL plugin in Neo4j. Every guide and document indicates that installing the GraphQL plugin is a piece of cake. However, the install button is disabled in my version of Neo4j Desktop, whichever way I try (Neo4j Desktop version 1.1.12, Neo4j version 3.4.9). The info message keeps saying "This plugin is not supported by graphs listed below:"
Is it due to a version mismatch or some other change in the API? The README document on the github page remarks "This branch for Neo4j 3.3.x". Is this the cause of the failure? If so, how can I use Neo4j version 3.3.x (Neo4j Desktop 1.1.12 does not permit such a downgrade)?
On the release page, the plugin version 3.5 (pre-release) has been published. Is there any possibility that I can use this? How can I manually install version 3.5?
![image](https://user-images.githubusercontent.com/1776524/47961744-412ad180-e054-11e8-8545-103ef173b6f5.png)
![image](https://user-images.githubusercontent.com/1776524/47961745-48ea7600-e054-11e8-8bb0-a9102eb52fdb.png)
Using GraphQL in my project would open up many possibilities, so I sincerely want to apply GraphQL to my Neo4j database.
Thanks
Answers: username_0: Forget it... The version number of the GraphQL plugin has to match the Neo4j database version number, I guess. (The current database version 3.4.9 does not work with the current GraphQL plugin, therefore it cannot be installed.) When you downgrade the db version to 3.4.0, it works flawlessly. Thanks.
Status: Issue closed
maiertech/gatsby-themes
721057216
Title: Create gatsby-theme-blog Question: username_0: # Requirements
- Try to test anything in `gatsby-node.js`.
- Use Gatsby file system API to generate pages.
- Make compatible with `gatsby-plugin-tags`.

# Frontmatter
- `title`
- `author`
- `date`
Status: Issue closed