vape-tool/VapeTool-Webapp
606945445
Title: Allow anonymous users Question: username_0: Right now, some people might not want to even test the app if it requires logging in beforehand. Answers: username_1: What access should they have? Everything except "My profile"? Is it only for alpha/beta? username_0: Everything except "My profile" and all calculations done on the backend. But there is no need to handle that explicitly, because the backend will throw a 4xx when the JWToken is not present in a request. Status: Issue closed
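For illustration, the 4xx-on-missing-token behaviour described above amounts to a guard like the following (a minimal Flask-style sketch with a hypothetical route name, not the actual VapeTool backend):

```python
# Illustrative only: a backend calculation endpoint rejects requests without
# a JWT, so the frontend needs no special handling for anonymous users.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

@app.route("/api/calculate")
def calculate():
    # assumption: the real backend reads a JWT from the Authorization header
    if not request.headers.get("Authorization", "").startswith("Bearer "):
        abort(401)  # the 4xx the issue mentions
    return jsonify(result=42)
```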
YeOldeDM/lets-godot-roguelike
230133176
Title: Make Map generate a dungeon Question: username_0: Map gets `func generate_dungeon( Vector2 size )` This calls the DungeonGen generate method and stores its return value for future use. From this data, it can paint the wall/floor tiles on the map. Answers: username_0: Set up a set of "Tile Families", maybe in `RPG`. Each family will be an array of tile indices for variations of one tile type. Make one family for walls and one for floors. Use these families to draw random variation tiles; a sketch of the idea follows below. From this, the creator could easily create several families and have the dungeon randomly choose a set, for random dungeon skins. In fact, maybe we could call this "skinning the dungeon". Those could be Skin Families. Kind of morbid, but sounds neat :D Status: Issue closed
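The tile-family idea maps naturally onto a small data structure. A minimal sketch (illustrative Python with made-up tile indices, not the project's actual GDScript):

```python
import random

# Each "family" is an array of tile indices that are variations of one tile type.
TILE_FAMILIES = {
    "wall":  [0, 1, 2, 3],    # four wall variations
    "floor": [10, 11, 12],    # three floor variations
}

def random_tile(family):
    """Draw a random variation tile from one family."""
    return random.choice(TILE_FAMILIES[family])

# "Skinning the dungeon" is then just swapping in a different set of families.
def paint_cell(is_wall):
    return random_tile("wall" if is_wall else "floor")

print([paint_cell(c) for c in (True, False, True)])  # e.g. [2, 11, 0]
```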
pytest-dev/pytest-bdd
258380920
Title: Async Step Definitions Question: username_0: Is there a way to use async step definitions with pytest-bdd? For a step definition like:

```python
@when('i send cucumbers ')
async def i_send_cucumbers(loop):
    pass
```

I see the warning:

```
pytest_bdd/scenario.py:137: RuntimeWarning: coroutine 'i_send_cucumbers' was never awaited
```

Answers: username_1: what about async then? does it even make sense? username_1: will this work for you?

```python
async def send_cucumbers():
    pass

@when('i send cucumbers ')
def i_send_cucumbers(loop):
    loop.run_until_complete(send_cucumbers())
```

username_0: This works. However, in my case all step definitions are async (and can call into async functions multiple times), so this would mean a lot of calls to run_until_complete. As a workaround I have defined a decorator that schedules the function on the event loop. A generic solution would be the ability to easily wrap any step definition with a function (similar to the before and after hooks). And then it would be great if support for async functions were provided out of the box. Especially after reading the discussion about implicit and explicit event loops in the thread here: https://groups.google.com/forum/#!msg/python-tulip/yF9C-rFpiKk/tk5oA3GLHAAJ - it seems to be tending towards implicit loops for Python code that runs on 3.5.3+ and/or 3.6+, so code that explicitly calls run_until_complete to run async functions will look more and more awkward. username_1: @username_0 this means pytest-bdd has to depend on yet another library, asyncio? I don't really see how a gherkin scenario can be async. If the whole point is that it must not break, I think you need to extend your testing suite with some kind of support for async functions (a decorator is a good idea). I think semantically you can't really represent an async process in step-by-step imperative Gherkin. Therefore it should be explicit. If you have to initiate a few async messages - they should stand behind one When step that describes this exercise in a form that humans understand. Gherkin is not a programming language; it is a way to describe steps to humans, who can't do async computation in their brains. What do you think, @username_2? username_2: well, the async question comes down to pytest, not specifically to pytest-bdd. And for pytest's dependency injection, it's not realistic that it starts to support async fixture definitions anytime soon. Also for tests, while it sounds cool, there seems to be little win in having async fixtures, simply because fixtures should be fast enough, and then it should not matter much if you optimise the dependency graph in a way that runs fixtures in parallel for some parts of the graph. It is worth the effort, though, to add an 'async clause' to the documentation mentioning the workaround: cut the async point at the fixture definition by waiting for the async function to finish. username_1: @username_0 could you show your decorator implementation for a when step that is awaiting? username_2: FYI: there's a helper apparently to minimize the effort: https://pypi.python.org/pypi/pytest-asyncio username_0: Support on the pytest side is fine. We are using aiohttp test-support, and with pytest-asyncio 0.8.0 we can also use the helpers from that package. However, these helpers work at the scenario level. The step definitions are one level below, and the calling code here: https://github.com/pytest-dev/pytest-bdd/blob/master/pytest_bdd/scenario.py#L137 is not checking if step_func is async, i.e. it only supports synchronous step functions.
In order to fix this, we currently need to wrap all steps with decorators like this:

```python
def sync(func):
    @wraps(func)
    def synced_func(*args, **kwargs):
        loop = kwargs.get("loop")
        if not loop:
            raise Exception("Need loop fixture to make function sync")
        return loop.run_until_complete(func(*args, **kwargs))
    return synced_func
```

username_0: @username_1 declaring a function async doesn't mean it is not imperative. Async functions are imperative just like synchronous functions. However, with await / yield from you explicitly define points where the execution of other scheduled coroutines is allowed. In order to schedule the parallel execution of multiple asynchronous functions you would use helpers like asyncio.gather (see https://docs.python.org/3/library/asyncio-task.html#example-parallel-execution-of-tasks). The reason we need to declare our step definitions async is that we are using the aiohttp client inside them to perform networking calls as part of our step definitions. Furthermore, the rest of the code base is completely async, hence we think it is only natural that the step definitions are also written with async. username_2: As I see from the pytest-asyncio code, it just awaits every 'async' fixture, so there's no real parallelism possible between the fixtures. And what do you mean by pytest supporting async fixtures in a 'proper' way? Can't really see that. username_2: But I do see your point about the need for the wrapper everywhere - it sucks. Looks like the only way to avoid that is to depend on pytest-asyncio and use its helpers directly in pytest-bdd. username_0: @username_2 even that is not possible, since there is no hook that allows me to insert a custom decorator on the functions, even if pytest-bdd doesn't support async step definitions out of the box. A hook for wrapping step definitions would help to reduce duplication. username_2: but I meant to change pytest-bdd itself to support that automatically username_2: are you up for making a PR which will add `pytest-asyncio` as a dependency and automatically use it to resolve async step definitions, if they are async? username_0: @username_2 In general, I am happy to make a PR. One question regarding the pytest-asyncio dependency: you mention it because we need a library to provide the loop fixture, right? username_2: @username_0 yes, also to keep as much `core` async stuff as possible in a single plugin (pytest-asyncio) username_0: @username_2 I did a quick check on the current test suite to see how I can best write tests for this new type of step function. Do you have a pointer for where I can best add tests? Or should I create a completely new file? username_2: You can put a new file here, tests/steps/test_async.py, copying the approach of tests/steps/test_unicode.py and replacing the definitions with async ones:

```
@given
async def ...
@when
async def ...
@then
async def ...
```

username_3: https://github.com/pytest-dev/pytest-bdd/pull/221 username_4: For what it's worth, I took the approach from #221 and then added that hook implementation as a separate pytest plugin that I can install as necessary. username_5: Is your pytest plugin available anywhere? Do you plan on maintaining it? My gut feeling is that integrating the functionality into `pytest-bdd` would be the path of least maintenance burden, but that is up to the maintainers I suppose. username_5: Agreed, *but* the cool thing is that despite the name, `async/await` doesn't necessarily mean that the operations happen in arbitrary order.
`await` just means that `async` operations suspend the current execution thread (not the system thread, mind you) and execution is resumed once they are completed, in the same order as specified in the function. This way, the semantics of the operations are unchanged and they happen in a step-by-step imperative mode. Please let me know if you would like to have a conversation about this topic. I'm always happy to talk about `async/await` :) username_6: Behave 1.2.6 adds some decorators for [Testing asyncio Frameworks](https://behave.readthedocs.io/en/latest/new_and_noteworthy_v1.2.6.html#testing-asyncio-frameworks). That may be inspirational for implementing something similar for pytest-bdd. username_7: Is there a guide somewhere to using pytest-bdd with pytest-asyncio? I'm not sure how to make them play nicely together and actually execute my bdd tests. username_8: @username_2 Hi! Are you planning to close this [PR](https://github.com/pytest-dev/pytest-bdd/pull/349)? username_9: Just shooting this question out, as I am stuck with the same issue of handling async step definitions, since pytest-asyncio cannot be used here. Is there any update or any way to handle this as of now? Hope you can help if any new changes or workarounds exist. @username_2 username_10: Most web development nowadays is moving to `async` frameworks (for instance [fastapi](https://fastapi.tiangolo.com/)). Testing these frameworks typically involves running an async app and also using an async test client. Support for async pytest-bdd step definitions would definitely help further adoption in these kinds of projects. username_2: @username_8 not for me to decide, I've requested a review from @youtux username_8: I forked the project and applied the [PR](https://github.com/pytest-dev/pytest-bdd/pull/349). I was able to take advantage of asynchronous tests. But as time passes, I see that this is unnecessary. If you write integration tests, then nothing prevents you from using [requests](https://requests.readthedocs.io/en/master/) as a client. I will support the voiced [idea](https://github.com/pytest-dev/pytest-bdd/issues/223#issuecomment-331860204) that adding asynchrony is redundant and solves a different problem. username_11: @username_8 but what if your integration tests are part of a whole asyncio project with lots of async tests? Isn't it better to just run `pytest` on the whole project and view all test results at once? I thought that was one of the key advantages of pytest-bdd. Otherwise, if you write your bdd tests fully isolated from the rest of the project's test structure (async fixtures, for example), what's the benefit of using pytest-bdd instead of, let's say, behave? Currently, I actually do as you suggested, separating the bdd tests (using requests in them) from the other tests. However, for local development and for CI/CD I always need to run two commands instead of one. I mean, it's probably not a "must-have" feature, but definitely "nice-to-have" for pytest-bdd. username_8: This is a fair argument. When you're used to writing in pytest and don't want to dive into behave, I'll choose pytest-bdd. Yet that solves another problem - the problem of infrastructure. username_12: Do we have a workaround to get async to work with pytest-bdd and aiohttp with `async with`? I was wrapping my async calls with `loop.run_until_complete` as suggested above, but it just means that I have to define every fixture twice and then wrap them just to get it to work.
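To make the wrapper idea from this thread concrete, here is a self-contained sketch that runs without pytest-bdd. The decorator mirrors the `sync` helper above; whether a `loop` fixture would resolve through such a wrapper in a real suite depends on pytest-bdd's signature introspection, so treat this purely as a sketch:

```python
import asyncio
from functools import wraps

def sync(func):
    """Run an async step to completion so the caller sees a plain function."""
    @wraps(func)
    def synced_func(*args, **kwargs):
        loop = kwargs.pop("loop", None) or asyncio.new_event_loop()
        return loop.run_until_complete(func(*args, **kwargs))
    return synced_func

@sync
async def i_send_cucumbers():
    await asyncio.sleep(0)  # stand-in for a real aiohttp call
    return "sent"

print(i_send_cucumbers())  # 'sent' -- the coroutine ran to completion
```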
creeperyang/blog
272997992
Title: Collected JavaScript Questions (Part 2) Question: username_0: ### 1. Expressions and Statements, starting from IIFEs

An IIFE (Immediately Invoked Function Expression) is a function expression that is invoked/executed immediately. We often see (including in some libraries):

```js
(function() {})()
```

This is one way to write an IIFE; the anonymous function executes immediately. Here are some equivalent forms:

```js
(function() {}())
!function() {}()
~function() {}()
```

Looks familiar, and nothing seems worth noting? Then consider this question:

```js
function(){}()
```

Is it an IIFE? Why or why not? What happens if you type it into the console? Pulled out of context like this, the question can be puzzling.

Running the code above throws an error, and going one step further, running `function(){}` by itself also throws:

<img width="406" alt="2017-11-10 11 39 00" src="https://user-images.githubusercontent.com/8046480/32665782-0887e652-c5fb-11e7-87ae-45ebfc3ebf13.png">

Here is a brief explanation of the cause: a JS program is composed of (syntactically valid) statements. When we enter `function (){}` on its own, the interpreter expects a valid statement, namely a function declaration. Unfortunately, a function declaration must have a name, so this throws. `function (){}()` fails for the same reason: as soon as the interpreter sees the keyword `function`, it expects to receive a function declaration, and we do not satisfy that rule.

The following diagram may help:

<img width="425" alt="2017-11-11 12 49 17" src="https://user-images.githubusercontent.com/8046480/32669031-d92e1d72-c604-11e7-80ce-f13d6838a101.png">

Next, we will go a bit deeper and take a complete look at Statements and Expressions in JS.

Answers: username_1: Very clearly explained - upvoted! 👍 username_0: ### 2. A short discussion of `String.fromCharCode`

Looking at the [spec](http://www.ecma-international.org/ecma-262/5.1/#sec-15.5.3.2), we know that:

```js
String.fromCharCode ( [ char0 [ , char1 [ , … ] ] ] )
```

accepts multiple arguments and returns the same number of characters. For example:

```js
String.fromCharCode(50, 51, 52) // '234'
```

But here is a counter-example (where browsers do not strictly follow the spec):

```js
String.fromCharCode(55297, 56375) // '𐐷'
```

Interesting, isn't it? Two arguments, but only one character is returned. I found this while looking into how to read strings from `utf16`-encoded data, and the cause is indeed related to encoding. I am noting it here for now and will give a detailed answer later.

username_0: ### 3. The `<script>` tag and the `async` and `defer` attributes

When the browser encounters an ordinary `<script>` tag while parsing HTML, it pauses parsing, downloads and executes the script, and then resumes parsing. I think most people know this, but the difference between `async` and `defer` may be less clear.

![image](https://user-images.githubusercontent.com/8046480/53166728-179cb280-3611-11e9-9e8e-e381c3f8ea29.png)

As the figure above shows:

1. With an **async** script (`<script async src="app.js"></script>`), the browser downloads the script while continuing to parse the HTML; once the download finishes, the browser pauses HTML parsing and executes the script.
2. With a **defer** script (`<script defer src="app.js"></script>`), the browser downloads the script while continuing to parse the HTML, and executes the scripts in order after HTML parsing completes - more precisely, the scripts execute after `domInteractive`.
3. **async** does not guarantee the execution order of scripts, while **defer** guarantees in-order execution.

username_0: ### 4. `<input>` and showing/hiding a clear button `X`

Suppose we have an input box `<input>` and a clear button `X`:

- the clear button is shown when the input is focused and contains text **(and hidden when the input blurs)**;
- clicking the clear button empties the input.

The implementation looks straightforward, and the question is not how to draw the clear button in pure CSS. The question is: in which order do the button's `click` and the input's `blur` events fire?

**`blur` fires before `click`**, and there lies the problem: on blur the clear button is removed, so the `click` event never fires, and consequently the text is never cleared.

**Workaround 1**: put the blur callback inside `setTimeout(fn, 0)` so that removing the clear button is delayed - will `click` fire then?

- Works in mobile browsers;
- Fails in desktop browsers; with a setTimeout of a dozen or so milliseconds it only succeeds sometimes.

Not a reliable approach; abandoned.

**Workaround 2**: do not remove the button from the DOM; set its opacity to 0 instead. `click` then always fires, but clicks on the transparent button have to be treated as no-ops. Not a perfect solution.

**Solution 3**: the button's `onmousedown` fires before the input's `blur`, so calling `preventDefault` in `onmousedown` prevents `blur` from happening before `click`. For details see <https://stackoverflow.com/questions/17769005/onclick-and-onblur-ordering-issue>. Perfect.

username_0: ### 5. Why does a vertical scrollbar produce a horizontal scrollbar? [css, layout, vw]

While testing the UI, I ran into an interesting problem: the UI was fine until the browser width grew past a certain value, at which point a horizontal scrollbar appeared.

I suspected a layout problem in some element, searched, and found nothing.

Using elimination, I removed the elements that differed from production one by one. After removing the first one, the UI was back to normal, but repeated checks showed nothing wrong with that element; removing the second one instead also restored the UI....

I finally found the cause: when one element was removed, the vertical scrollbar disappeared - the scrollbar was the culprit!

Digging further: the width unit **`vw`** includes the scrollbar width, i.e. the width of `100vw` is `document.documentElement.offsetWidth`, so extra care is needed when using a rem-based layout.

```css
html {
    font-size: calc(100vw / 3.75);
}
```

The fix:

```css
::-webkit-scrollbar {
    width: 0;
    height: 0;
    display: none;
}
```

Just hide the scrollbar.

Reference: [Vertical Scrollbar leads to horizontal scrollbar](https://stackoverflow.com/questions/13569610/vertical-scrollbar-leads-to-horizontal-scrollbar)
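The two-code-units-one-character result in section 2 is UTF-16 surrogate-pair decoding, and the arithmetic can be checked directly. A quick sketch (in Python, since the pairing formula is language-independent):

```python
# UTF-16 surrogate pair -> code point:
#   code_point = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
high, low = 55297, 56375  # the two char codes from the example above
code_point = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
print(hex(code_point), chr(code_point))  # 0x10437 𐐷 -- one char from two code units
```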
shivammathur/setup-php
916991654
Title: add 7-zip to windows based php setups Question: username_0: **Are you willing to submit a PR?** no idea what needs to be done. I have no experience with this setup. Status: Issue closed Answers: username_1: 7-Zip is pre-installed on the GitHub Runners and is in PATH. Test workflow: https://github.com/username_1/test-setup-php/actions/runs/924351139/workflow
develsoftware/GMinerRelease
1179078879
Title: Bug Report - v2.90 performance increases not taking effect Question: username_0: In the comments it says: "improved performance for Ethash+TON dual mining" and "improved TON performance". I have received no performance uplift (neither in the primary nor the secondary algo). - 2.90 and 2.85 are mining at precisely the same speed. - I tried the update with both my LHR and non-LHR rigs; not even a small speed increase is visible. Is there something I must do to [activate] the new improvement to Eth+TON? Regards, P.S. What exactly was improved? Or was it just an improvement to the default quick-start settings? Answers: username_1: I tested with RTX3090 and RTX3080 and there was no change. Are there any special settings? username_0: I don't think it has got worse; it is just exactly the same. They post it as propaganda (an advertising gimmick). Other miners list performance uplifts, so they do it too. It just ends up pissing people off, who will then leave to try other miners. (Either in protest about being lied to, in annoyance at the lack of context, or because they think another miner app will give the uplift they were suddenly expecting.) username_0: I *might* have noticed that LHR TON performance is a little more consistent (i.e. the speed doesn't jump around as much). Seems stable at:

| Card | Ether | Setting | TON |
| --- | --- | --- | --- |
| 3070 Ti | 63.03 | Dual_Intensity=20 | 1.73 |
| 3080 Ti | 86.42 | Dual_Intensity=22 | 2.62 |

username_2: @username_0 What memory does your 3070ti have? What OC? Mine doesn't go over 59 Mhs + 1.6 Ghz on gminer 2.89. Need to try 2.90
mozilla/areweslimyet
115136611
Title: Properly normalize process names Question: username_0: Currently we just drop the pid from a process name. This works fine for standard e10s, where there is the 'Main' process and the 'Web Content' process. When enabling multiple content processes we need to do something more elaborate, the idea being: Run N: Web Content (98754) => WebContent 1 Web Content (87689) => WebContent 2 Run N+1: Web Content (56789) => WebContent 1 Web Content (45678) => WebContent 2 Answers: username_0: This landed on the e10s branch in f39f6441b598e9af49470c294e1401f65577b5ae Status: Issue closed
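A minimal sketch of the normalization described above (illustrative Python, not the project's actual code): strip the trailing pid, then append a stable per-run index so the Nth content process gets the same label across runs.

```python
import re
from collections import defaultdict

def normalize(process_names):
    """Map e.g. 'Web Content (98754)' -> 'Web Content 1' within one run."""
    counters = defaultdict(int)
    normalized = []
    for name in process_names:
        base = re.sub(r"\s*\(\d+\)$", "", name)  # drop the '(pid)' suffix
        counters[base] += 1
        normalized.append(f"{base} {counters[base]}")
    return normalized

print(normalize(["Web Content (98754)", "Web Content (87689)"]))
# ['Web Content 1', 'Web Content 2'] -- same labels regardless of the run's pids
```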
edenspiekermann/iframify
147834049
Title: Doesn't support attributes on html or body elements Question: username_0: getIframeContentForNode generates the html and body elements with no attributes. It might be helpful to add any existing attributes from the global html and body elements to the generated ones. Answers: username_1: Very good idea. username_0: And/or allow them to be explicitly defined via some configuration. Status: Issue closed username_1: Done with: https://github.com/edenspiekermann/iframify/commit/0957408b2c21aa6c6b51088df4c6a58a1b24d231.
unisonweb/unison
935169537
Title: Case of wrong indentation of |> in printed code Question: username_0: This code ``` fixDigit i x = cases Sudoku digits -> digits |> List.mapIndexed (j -> cases (Digit s) -> if i == j then Digit (Set.fromList [x]) else Digit s ) |> Sudoku ``` gets printed as ``` fixDigit : Nat -> Nat -> Sudoku -> Sudoku fixDigit i x = cases Sudoku digits -> use Nat == digits |> mapIndexed (j c3sjrdoffv1 -> (match c3sjrdoffv1 with Digit s -> if i == j then Digit (Set.fromList [x]) else Digit s)) |> Sudoku ``` where both `|>` operators produce parse errors. Answers: username_1: As someone who uses `|>` quite a lot, I've run into this as well username_2: related #1035 username_3: Fixed by #2399 @runarorama fyi, if you say "fixes #ticket1 and #ticket2", GitHub will only close #ticket1. The parser doesn't seem to be very smart. 😀 You can do "fixes #ticket1, fixes #ticket2" instead. Status: Issue closed
MicrosoftDocs/microsoft-365-docs
505898739
Title: Background color and disclaimer changes? Question: username_0: I noticed that the background color is no longer applied to the background of the logo => in the above example, the black background behind the Office 365 logo. This also changed for existing branding. Also, the disclaimer now has an added link to the Microsoft privacy policy, plus a note that the mail is protected with Office 365 Mail Encryption and a link to the Office support page. Is this an intended change (that I just missed), and if so, will it be possible to change the link to an organisational privacy policy (as this service is usually used in the context of an organisation where Microsoft is the processor) and to remove the note and link regarding OME? Will the change that the background color does not apply to the logo background stay, or will the logo background be adjustable separately? (The latter would be great.) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b04de3d3-6a94-29f8-3a3d-68ed9422e058 * Version Independent ID: e50fc407-2c78-a3cc-71cd-7978ca340189 * Content: [Add your organization brand to your encrypted messages](https://docs.microsoft.com/en-us/microsoft-365/compliance/add-your-organization-brand-to-encrypted-messages#feedback) * Content Source: [microsoft-365/compliance/add-your-organization-brand-to-encrypted-messages.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/compliance/add-your-organization-brand-to-encrypted-messages.md) * Service: **o365-seccomp** * GitHub Login: @KCCross * Microsoft Alias: **krowley** Answers: username_1: @username_0 - Thank you for submitting feedback. I will get this issue over to the Microsoft 365 writing team for investigation. Thank you for reporting and making the docs better. Much appreciated. I made a note to request the team to update this when the work is complete. username_1: @KCCross - Can you please share your insights on this issue? Thank you. cc: @kenwith username_2: There's a bug with the background logo not showing, and we are fixing that. As for the disclaimer, it's a new layout change, and you will be able to point to the organization's privacy policy. We're just setting up the new layout, and there'll be more details on configuring the privacy policy later. username_1: @username_2 - We would like to follow up. Do you have any updates on whether the bug has been fixed? Thank you. username_1: @username_0 - From our understanding, the issue you raised has been answered by KCCross and username_2, so we will close this issue. Thank you for your contribution to making the docs better! Much appreciated! Status: Issue closed username_3: @Samschan-ms when will the disclaimer option be added to set a custom privacy policy?
rook/rook
298534100
Title: Helm chart wait for TPRs/CRDs Question: username_0: **Feature Request** A very simple quality-of-life feature: have the Helm chart installation wait until the TPRs/CRDs are available before finishing. This helps in cases such as using a [helmfile](https://github.com/roboll/helmfile) to deploy the operator and a cluster. I saw the idea in [prometheus-operator](https://github.com/coreos/prometheus-operator/blob/release-0.17/helm/prometheus-operator/templates/get-crd-job.yaml) and it looks very simple to adapt for rook-operator; I'd be happy to contribute it. Answers: username_1: This sounds like a very useful idea @username_0, I like it! I'd be happy to see a pull request with this enhancement :) username_0: Opened a PR, hope it gets into 0.7 :) Also, I noticed in the docs you're looking for contributions for charts for the resources themselves - were you thinking a chart per kind, or a generic one that can configure multiple clusters/pools/etc...? Status: Issue closed
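The prometheus-operator trick linked above boils down to polling until the CRD exists. A rough sketch of the same idea (illustrative, not the chart's actual hook; the CRD name in the comment is hypothetical):

```python
import subprocess
import time

def wait_for_crd(name, timeout=300, interval=5):
    """Poll `kubectl get crd <name>` until it succeeds or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(["kubectl", "get", "crd", name],
                                capture_output=True)
        if result.returncode == 0:
            return True
        time.sleep(interval)
    return False

# e.g. wait_for_crd("clusters.rook.io")  # hypothetical CRD name
```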
jasonscott5391/udacity-android-developer-nanodegree
327956150
Title: Implement Detail Activity Question: username_0:
* DetailActivity
  * Title
  * Poster
  * Overview
  * Rating
  * Release Date
* Insert movies in batch with replace on conflict.
  * Updates rating.
  * Faster to overwrite than read.
* Add initialized flag.
  * Sync on start (onCreate) and set flag to true.
* Get Movie by ID as MutableLiveData.
  * AsyncTask for fetching from database and post value.

Status: Issue closed
appium/appium
260922144
Title: Doesn't select checkbox on webview, whereas it executes fine without any error Question: username_0: ## The problem Cannot select the checkbox in the webview: the step executes fine and does not show any errors, but the actual state of the checkbox remains the same as it was (unchecked). ## Environment * Appium version (or git revision) that exhibits the issue: 1.7.1 * Desktop OS/version used to run Appium: Windows * Mobile platform/version under test: Android version 7.0 * Real device or emulator/simulator: emulator for tablet Nexus 9 * Environment: Robot Framework + Python + Appium ## Details Inputting text into elements works fine for paths such as //textarea[@id="132"]. Whereas, when I try to select a checkbox with a path like //input[@id="200"], execution passes but the checkbox remains unselected. For example, the webview HTML for a typical checkbox looks like this:

```html
<input type="radio" name="218" onchange="checkBoxChanged(this.name)" value="1" id="40" data-type-ios="5">
```

The Chrome driver version for the webview is 2.23. ## Link to Appium logs https://gist.github.com/username_0/953ff3fd9fb854af3e6c737e99c2ff07 Answers: username_1: Try to perform a tap by coordinates. If it still does not work, you may simulate triggering the corresponding event on the checkbox using executeScript. username_0: Tried to simulate it using JavaScript and jQuery with the following scripts:

```js
document.getElementById("200").checked = true
```

or

```js
$("#200").attr('checked', 'checked')
```

but no luck. Any other ideas? Thanks in advance. username_1:

```js
$.fn.changeVal = function (v) {
    return $(this).val(v).trigger("change");
}

$("#my-input").changeVal("Tyrannosaurus Rex");
```

username_2: @username_0 I was able to select check boxes by doing: `checkboxInput.SendKeys(Keys.Space);` Worth giving it a try! Status: Issue closed username_0: @username_2 @username_1 Thank you guys, @username_2's solution helped me.
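Since the reporter's stack is Robot Framework + Python, username_2's space-key trick would look roughly like this with the Python Selenium bindings (a sketch using the locator from the question; `driver` is assumed to be an Appium/Selenium session already switched to the webview context):

```python
from selenium.webdriver.common.keys import Keys

def toggle_checkbox(driver):
    """Send a space key press to the checkbox instead of clicking it."""
    checkbox = driver.find_element_by_xpath('//input[@id="200"]')
    checkbox.send_keys(Keys.SPACE)  # toggles the checkbox like a keyboard press
```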
CLAMP-IT/moodle-blocks_filtered_course_list
216780608
Title: Category label templates Question: username_0: We could use these to allow administrators to display category ancestors in a label, for instance. Answers: username_0: Some notes. Options might be: NAME, PARENT, and PATH. I assume we want this for category rubric titles and not for the category links that show up in a generic list. Or maybe we need both. Status: Issue closed
wekan/wekan
860488555
Title: Feature request: label ordering in cards and the label list Question: username_0: In the current version, the labels on a card are ordered as they were created in the label list: the first label added to the card is on the left, and the last added label is on the right.

1. It would be nice if users could change the order on cards. A card has Label-A | Label-B | Label-C; the user moves C left, before A. Result: Label-C | Label-A | Label-B

2. And in the label list:
--------------
Label-A
Label-B
Label-C
The user moves C up, before A. Result:
--------------
Label-C
Label-A
Label-B
This way users can move often-used labels to the top.

3. Also, labels added to a card should stay at their position:
1. added Label-C at position 1
2. added Label-B at position 2
3. added Label-A at position 3
The result should be: Label-C | Label-B | Label-A

This would bring a very flexible and nice-looking view of the labels on cards. A sample: suppose we have three types of labels, like rooms, groups, and actions, and we want rooms on the left, groups in the middle, and actions on the right. We create this label list:
--------------
room1
group1
action1
Then we create a new card with the labels room1, group1, action1. It looks nice: room1, group1, action1. Later we add a group2 to the label list:
--------------
room1
group1
action1
group2
and create a new card with the labels room1, group2, action1. In the current version the result is room1, action1, group2 - the group ends up on the right - but it should be in the middle: room1, group2, action1. See the sketch below. Answers: username_1: See also #3424
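The difference between the current and the requested behaviour in point 3 is just which ordering key is used. A small sketch (illustrative Python, not WeKan code):

```python
# Current behaviour: a card's labels follow the board label list's order.
board_labels = ["room1", "group1", "action1", "group2"]

def current_order(card_labels):
    return sorted(card_labels, key=board_labels.index)

# Requested behaviour: labels keep the order they were added to the card.
def requested_order(card_labels):
    return list(card_labels)

card = ["room1", "group2", "action1"]  # labels in the order they were added
print(current_order(card))    # ['room1', 'action1', 'group2'] -- group on the right
print(requested_order(card))  # ['room1', 'group2', 'action1'] -- group in the middle
```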
jlippold/tweakCompatible
346144310
Title: `App Admin` working on iOS 11.3.1 Question: username_0: ``` { "packageId": "com.unlimapps.uaupdatetools", "action": "working", "userInfo": { "arch32": false, "packageId": "com.unlimapps.uaupdatetools", "deviceId": "iPhone10,6", "url": "http://cydia.saurik.com/package/com.unlimapps.uaupdatetools/", "iOSVersion": "11.3.1", "packageVersionIndexed": true, "packageName": "App Admin", "category": "System", "repository": "beta.unlimapps.com", "name": "App Admin", "installed": "1.0r-101", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 84% with 11 working reports.", "id": "com.unlimapps.uaupdatetools", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.0.7", "shortDescription": "Downgrade iOS apps to any previous version! No AppSync needed!!!", "latest": "1.0r-101", "author": "UnlimApps Inc.", "packageStatus": "Working" }, "base64": "<KEY>", "chosenStatus": "working", "notes": "" } ```
GentenStudios/Phoenix
603951740
Title: Greedy meshing Question: username_0: ## Story Add greedy meshing. Greedy meshing is a method of reducing the total number of polygons so that the terrain is less intensive for the computer to load; see the sketch below. ## MVP - [ ] Implement greedy meshing ## Stretch - [ ] Add multi-chunk greedy meshing (if possible)
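For reference, the core of greedy meshing on a single 2D slice fits in a few lines: adjacent cells with the same block id are merged into the fewest axis-aligned rectangles, so one big quad replaces many unit quads. A minimal illustrative Python sketch (not engine code):

```python
def greedy_mesh(mask):
    """mask: 2D grid of block ids (0 = empty). Returns (x, y, w, h, id) quads."""
    h = len(mask)
    w = len(mask[0]) if h else 0
    used = [[False] * w for _ in range(h)]
    quads = []
    for y in range(h):
        for x in range(w):
            if used[y][x] or mask[y][x] == 0:
                continue
            block = mask[y][x]
            # grow the quad to the right while cells match
            qw = 1
            while x + qw < w and not used[y][x + qw] and mask[y][x + qw] == block:
                qw += 1
            # grow downward while every cell in the candidate row matches
            qh = 1
            while y + qh < h and all(
                not used[y + qh][i] and mask[y + qh][i] == block
                for i in range(x, x + qw)
            ):
                qh += 1
            for yy in range(y, y + qh):  # mark merged cells as consumed
                for xx in range(x, x + qw):
                    used[yy][xx] = True
            quads.append((x, y, qw, qh, block))
    return quads

print(greedy_mesh([[1, 1, 0],
                   [1, 1, 2]]))  # [(0, 0, 2, 2, 1), (2, 1, 1, 1, 2)]
```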
hubblecommerce/hubble-frontend-pwa
748697334
Title: [FEATURE] Refactored ViewProduct.vue Question: username_0: **Is your feature request related to a problem? Please describe.** As declared in the hubble coding guidelines, a Vue component should import components dynamically. Furthermore, the add-to-wishlist button was not shown in the desktop view. **Describe the solution you'd like** Added dynamic imports of components. Added the wishlist button to the desktop view and adjusted its position. Status: Issue closed
marktext/marktext
1128742838
Title: Invalid key bindings on Muya input helper Question: username_0: ### Description Muya's `@` input helper key bindings aren't synced with the user key bindings in MarkText. In addition, the current default key bindings are outdated and may be invalid depending on the OS, because we use different key bindings per OS. ### Steps to reproduce Open the input helper with `@` on an empty line. **Expected behavior:** Key bindings should be the same as configured in MarkText. **Actual behavior:** Outdated default key bindings are shown. **Link to an example: [optional]** ![](https://user-images.githubusercontent.com/42169660/152955461-c5474be4-70a5-4228-b794-d747bd35d458.png) ### Versions - MarkText version: 0.16.3, 0.17.0-rc.1 - Operating system: all /cc @username_1 Any suggestions? Answers: username_1: UI components support passing some configuration parameters, and we can pass the required KeyBinding information to the UI components later.
thehapyone/FastestRplidar
699057391
Title: Segmentation fault (core dumped) Question: username_0: When running a script which starts the motor, sleeps for 10 seconds, and then stops the motor, I get the following error: Segmentation fault (core dumped). Answers: username_0: Update: solved! You can't start the motor again after you have stopped it within the same instance. You have to re-initialize the lidar to start it again. Status: Issue closed
portainer/portainer
226925135
Title: add support for additional "runtime" settings when creating new containers Question: username_0: When creating a new container, there are times when additional runtime options are needed which are not natively supported in Portainer. Rather than clutter the UI with options, we should support the ability to add these manually (enter the option manually along with its corresponding value). As an example: --cap-add, --cap-drop, --cpu-count, --dns, --ip. I think a simple table, similar to the one used to set environment variables, would work, placed in the "runtime" area. Answers: username_1: This will be quite tricky to implement, as it requires parsing the options and finding the equivalent API setting for each parsed option. I think we already had this discussion in another issue but I was not able to find it. I'll flag this as an evolution. username_2: Hi It's #597 username_1: Awesome, thanks @username_2 Will close this one and link it in #597 Status: Issue closed username_0: OK, well, how about to start with we just add a few (a dropdown select box)? This way we extend Portainer to support these extra runtime options, but we don't have to parse to find the API... rather, we support 4-5 now. Initially, I want the --cap-add option to enable support for NetSil. Sent from my iPad
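The parsing problem username_1 mentions is essentially a flag-to-API-field mapping. A sketch of the idea (Python; the HostConfig field names below are from the Docker Engine API, but the mapping helper itself is illustrative, not Portainer code):

```python
# Map a docker-run style flag to its Docker Engine API HostConfig field.
FLAG_TO_HOSTCONFIG = {
    "--cap-add":  "CapAdd",    # list of capabilities to add
    "--cap-drop": "CapDrop",   # list of capabilities to drop
    "--dns":      "Dns",       # list of DNS servers
}

def build_host_config(options):
    """options: list of (flag, value) pairs entered in a UI table."""
    host_config = {}
    for flag, value in options:
        field = FLAG_TO_HOSTCONFIG.get(flag)
        if field is None:
            raise ValueError(f"unsupported runtime option: {flag}")
        host_config.setdefault(field, []).append(value)
    return host_config

print(build_host_config([("--cap-add", "NET_ADMIN"), ("--dns", "8.8.8.8")]))
# {'CapAdd': ['NET_ADMIN'], 'Dns': ['8.8.8.8']}
```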
swarmcity/sc-boardwalk
212705453
Title: As a user I consult my balance to see how many tokens I have Question: username_0: _From @username_0 on November 21, 2016 13:27_ _Copied from original issue: swarmcity/ac-terminal#15_ Answers: username_0: _From @kingflurkel on November 30, 2016 14:53_ ac-balance username_0: _From @kingflurkel on November 30, 2016 14:54_ Which Oracle / exchange will I use to do the ref currencies? Status: Issue closed
BuildACell/txtlsim-python
359589562
Title: Implement buffer component Question: username_0: The current version of the package does not implement a `buffer` component, which is where we will probably want to keep things like the NTP, A/GTP, amino acid, and salt concentrations. I didn't need this for the initial implementation because I used very simple core mechanisms for transcription and translation that don't use energy resources. Someone should implement a `buffer` component that sets up the energy species and also includes a mechanism for degradation of ATP (which for now can be a `noop`, I guess).
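A rough sketch of what such a component might look like (every class and method name here is hypothetical, not the package's actual API):

```python
# Hypothetical sketch: a buffer component that owns the energy species and
# carries an ATP-degradation mechanism, defaulting to a no-op as suggested above.
class Buffer:
    ENERGY_SPECIES = ["NTP", "ATP", "GTP", "amino_acids", "salts"]

    def __init__(self, concentrations=None, atp_degradation=None):
        # every energy species defaults to 0.0 unless a concentration is given
        self.species = {name: 0.0 for name in self.ENERGY_SPECIES}
        self.species.update(concentrations or {})
        # degradation mechanism hook; a no-op lambda for now
        self.atp_degradation = atp_degradation or (lambda species: None)

    def step(self):
        self.atp_degradation(self.species)

buf = Buffer({"ATP": 1.5})
buf.step()                  # no-op degradation for now
print(buf.species["ATP"])   # 1.5
```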
cu-mkp/m-k-manuscript-data
367342054
Title: SP16 "Azure enamels" - folder contains text and notes file Question: username_0: Possibly the cause of errors in "lizard logs". To rectify, notes file would need to be removed. The actual text of the annotation is the doc "AnnotationSpring2016_ChangClemens_AzureEnamels_11r59r61v93v" (see text folder https://drive.google.com/open?id=0BwJi-u8sfkVDMHN2LWtpNnE5cGc) Wait to re-organize folder until consensus reached with M&K team about workflow Answers: username_1: This is the annotation text: https://docs.google.com/document/d/1YNRGZTID5NGXX3vT5NqRzA7UxT1DW03OSXqLl6cc28U/edit The file of notes needs to stay in the folder until annotation editing is complete. username_0: Note added to metadata table about additional files. Metadata already updated Status: Issue closed username_0: ann_033_sp_16 "Azure Enamels," fol. 11r, 59r, 61v, 93v (Chang, Clemens) username_0: Notes file is owned by Le Pouesard username_1: This is a file that is just for the revision of the annotation, so does it matter that it is owned by Emma? I would say no, it doesn't. What do you say? username_1: this essay was abandoned and usable content moved to other essays Status: Issue closed
RenderHeads/UnityPlugin-AVProVideo
682520195
Title: IOS platform video stuck Question: username_0: **Describe the bug** Hello there! I found a problem when using AVProVideo: when playing a video on the iOS platform, if a call comes in at that moment and is then ended, or a Bluetooth headset is connected, the video gets stuck and cannot be played. We tried configuring the Media Player and checked Pause Media On App Pause and Play Media On App Unpause, but it still didn't take effect. After checking the code, we found that in the OnApplicationPause function in MediaPlayer.cs, m_Control.Pause() is only called on non-iPhone platforms; after removing this platform restriction, the above problems no longer occur in the iOS build we produce. I would like to ask: why is calling this function restricted on the iOS platform? Will there be other problems if I remove this restriction? Or is there any other way to solve the problems described above? **Your Setup (please complete the following information):** - Unity version: 2018.4.19f1 - AVPro Video version: 1.11.0 - Operating system version: IOS - Device model: - Video specs (resolution, frame-rate, codec, file size): **To Reproduce** 1. 2. 3. **Logs** If applicable, add error logs to help explain your problem. **Screenshots** If applicable, add screenshots to help explain your problem. **Videos** If applicable, add a copy of your video or the URL ### Please DO NOT LINK / ATTACH YOUR PROJECT FILES HERE Instead email the link to us <EMAIL> Answers: username_1: Hello! There is an option under the platform specific settings for iOS: "Resume playback on audio session route change". You need to enable this in order for the video to automatically resume playing when headphones are connected or disconnected. username_0: Thank you! I also encountered another problem: a MediaPlayer is embedded in a scene, but when returning from another scene, the video suddenly cannot be played and a black screen is displayed. This problem does not occur on Android; it is only encountered on iOS.
spree/spree
60707783
Title: Impossible to implement new European tax law for digital goods (as of 1/1/2015) Question: username_0: This is currently almost, but not quite, possible with Spree's zone-based taxation system. We have to output prices including VAT, and depending on the type of good (digital vs physical), different VAT rules apply: - Physical goods: Regardless of destination, all prices carry the VAT of the **origin** country. - Digital goods: If sold to consumers, all prices carry the VAT rate of the **destination** country. ### Example: A cart containing a shirt and a music file to be downloaded, ordered from Denmark in a web store based in Germany, has to apply two different VATs: - 19% German VAT on the shirt, as it is a physical good - 25% Danish VAT on the downloaded music file. ### Difficulties with Spree's current taxation model In order to achieve correct taxation, we created tax rates and corresponding zones for each individual EU member state (for digital goods). We also created a tax rate and zone for all EU countries with the German VAT (as our client is based in Germany). This works, but only as long as we display prices excluding VAT, which is not allowed in the EU. If we switch to prices including tax, Spree has difficulty calculating the correct tax rate: because we can only set one "default tax zone" (EU for physical goods), with digital goods the net price changes depending on which country you order from, which cannot possibly be correct (a download with a gross price of 10 EUR would have a net price of 8.40 when ordered from Germany and of 8.00 when ordered from Denmark). We need Spree to always calculate the net price from the zone in which the retailer is based. To achieve this, we propose the following: the default tax zone should be a `belongs_to` relation on the `TaxCategory` model. This way, Spree should be able to correctly assess which `TaxRate` to use to calculate the net price for different kinds of products. This only makes sense for VATs which have to be displayed as included in the price. Thus, if it is not set, that would mean the tax is a normal ('Sales') tax which is applied additionally. As you would expect, this is a big change deep in Spree's taxation code. This is the only way we could think of to solve this difficult problem so far; we'd love to have your feedback on this issue. We started [work on a branch](https://github.com/magiclabs/spree/tree/better-tax-zones), and will publish a (probably breaking) PR shortly to illustrate our proposal and work this out collaboratively. Answers: username_1: I :heart: Taxes :-/ username_2: @username_1 especially in the EU! username_1: I'm actually in favor of the EU approach over the US one - our sales tax system is a mess, and makes VAT look easy peasy. username_2: Ah, tax sucks everywhere ;) The problem is that we are currently focused on some other features in our next sprint. So, sadly, this has to wait until the week after next. If someone has great ideas on how to refactor this, please feel free to leave some advice here, so we can take it into account. BTW: Magento (the shop system we are replacing right now) DOES NOT get it right either. It is actually a big mess here in Europe - nearly no shopping system handles this messy law correctly. So, if we (the spree community) get this right, this is a real advantage over lots of shopping systems! username_3: @username_0 Try our [Taxamo](http://www.taxamo.com/how-it-works/) solution. Repo [here](https://github.com/taxamo).
@username_2 we have just released a Magento module to handle EU VAT compliance. Available from Magento Connect [here](http://www.magentocommerce.com/magento-connect/taxamo-for-new-eu-vat-calculation-and-moss-reporting-1.html). username_2: @username_3 Wow, this is a whole new level of marketing! Scanning GitHub issues? Thanks for mentioning your **product**, but I think you are wrong here.... username_0: I've been spending quite some time over the last couple of days looking at the tax calculation code in Spree, and I've found some issues. I now think that the solution we outlined above, that of having different ```default_tax_zones``` for item categories, would be a hacky way of dealing with Spree's current handling of taxes that are "included in price". ### Current situation The fundamental issue is that the ```pre_tax_amount``` - which is the shop's base price for an item - is calculated dynamically every time an order is updated, using whatever ```Spree::TaxRate``` is set as ```included_in_price```. Included tax rates then generate an adjustment which does not change the (gross) price, unless the order's zone is outside the default zone, in which case the following happens: - The tax rate from the default zone is [deleted from the array of applicable rates](https://github.com/spree/spree/blob/master/core/app/models/spree/tax_rate.rb#L51-L61) - The VAT from another tax zone is applied, does ***not*** change the gross price, ***but*** [changes the ```pre_tax_amount```](https://github.com/spree/spree/blob/master/core/app/models/spree/tax_rate.rb#L67-L79) - the ***net*** price of the item. This can only be correct if the two VATs (destination VAT and default VAT) have the same rate. ### Desired situation VATs, in real life, do not change how much a merchant charges. VATs change what prices look like. More technically: ```TaxRate.adjust``` should change the ```price``` of a line item depending on which ```TaxRate```s are included in price for that ```Zone```. The price of the line item should then be calculated as the ```pre_tax_amount``` plus whatever ```included_in_price``` rates apply. This should fulfil the following requirements: An order with one line item (a download) priced at 100 € should cost 100 € if sent to Germany. At 19% German VAT, that means a price of 100 Euros, a ```pre_tax_amount``` of 84,03 Euros, and 15,97 Euros VAT as an included adjustment. If the same order is ordered from the US, the price of the download on the invoice should simply be 84,03 Euros, with no adjustments (as no taxes apply for that transaction). Currently, Spree handles this as 100 Euros ***minus*** a 15,97 VAT refund adjustment. No need. If that same order were now sent to a country with a different VAT of, say, 25%, we should again have: a price of ```84,03 * 1,25 = 105,04``` with an included adjustment of 21,01 Euros. The ```pre_tax_amount``` for this line item does not, and should not, change. ### Configuring Spree for MOSS with this setup If things were implemented like this, the MOSS mess could be accommodated with the following set-up: You have zones for all EU countries, including the shop's residence country. The shop's residence zone is the default zone. Now, for different tax categories to be taxed with different VATs, you simply create TaxRates that apply to digital products, are ```included_in_price```, and have an ```amount``` reflecting the VAT rates in each country. For all normal products, you create TaxRates with the German VAT and zones inside the EU, but outside Germany.
### Call for community participation While we believe that this is correct, we do not know what the requirements of other shops are. Can you reach out to people and find out whether our reading of what VAT is would fulfill their needs? ### Implementation details Currently, ```TaxRate.potential_rates_for_zone``` [always includes](https://github.com/spree/spree/blob/master/core/app/models/spree/tax_rate.rb#L51-L61) the default_tax_zone rate for that product category. I believe it's easier if we just do this:

```
def self.potential_rates_for_zone(zone)
  self.where(zone_id: Spree::Zone.potential_matching_zones(zone).pluck(:id))
end
```

The default_zone should only be used for 1. showing prices in the front-end to users without an address (those prices have to include VAT, so they'll have to be calculated dynamically) 2. (optional) If prices have to be ***entered*** including VAT in the backend, it should be used to calculate the ```pre_tax_amount``` on line items - once, when they're created. Spree's input label translations hint at [that being the default anyway](https://github.com/spree-contrib/spree_i18n/blob/master/config/locales/de.yml#L799). I did look at the threads https://github.com/spree/spree/issues/4397 and https://github.com/spree/spree/issues/4327, as well as at https://github.com/spree/spree/issues/4318#issuecomment-34723428 Stuff that will need changes: https://github.com/spree/spree/blob/master/core/app/models/spree/adjustable/adjustments_updater.rb#L43 username_0: We will now start to implement something along the lines of what I mentioned above. The idea is the following: - Prices, discounts, shipping rates, etc. will *always* be entered without VAT (just like in the US). - Variants and line items will both get a method that presents the price including VAT for the default zone or the order's user zone. - In the course of this, we'll have to change some tests, too. The difference between VAT and sales tax is really an issue of displaying prices, discounts and shipping rates to users. In terms of how the calculation is done, it is actually very similar to sales tax - especially for the merchant herself. Status: Issue closed
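The pricing rule proposed in the "Desired situation" section above can be summed up in a few lines. A sketch (illustrative Python, not Spree code): the merchant's net price is fixed, and the displayed gross price depends only on which VAT rate is included for the buyer's zone.

```python
def gross_price(pre_tax_amount, included_vat_rate=None):
    """Gross price shown to the buyer; None means no VAT is included."""
    if included_vat_rate is None:
        return round(pre_tax_amount, 2)
    return round(pre_tax_amount * (1 + included_vat_rate), 2)

pre_tax = 84.03                    # net price set by a German shop
print(gross_price(pre_tax, 0.19))  # Germany: 100.0  (84.03 * 1.19 = 99.9957)
print(gross_price(pre_tax, 0.25))  # Denmark: 105.04, included adjustment 21.01
print(gross_price(pre_tax))        # US: 84.03, no adjustment at all
```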
kubesphere/kubekey
678276694
Title: redis-ha can't be installed Question: username_0: ## Issue like this:

```
Waiting for service redis-ha-announce-0 to be ready (1) ...
Waiting for service redis-ha-announce-0 to be ready (2) ...
Waiting for service redis-ha-announce-0 to be ready (3) ...
Waiting for service redis-ha-announce-0 to be ready (4) ...
Waiting for service redis-ha-announce-0 to be ready (5) ...
Waiting for service redis-ha-announce-0 to be ready (6) ...
Waiting for service redis-ha-announce-0 to be ready (7) ...
Waiting for service redis-ha-announce-0 to be ready (8) ...
Waiting for service redis-ha-announce-0 to be ready (9) ...
Waiting for service redis-ha-announce-0 to be ready (10) ...
Could not resolve the announce ip for redis-ha-announce-0
Error from server (BadRequest): container "haproxy" in pod "redis-ha-haproxy-ffb8d889d-mnsv8" is waiting to start: PodInitializing
```

Answers: username_0: I have done a little research - maybe a coredns issue? username_1: Are all network and dns components of the cluster working properly, such as calico, coredns, nodelocaldns? username_0: Issue closed since the second install was successful. Status: Issue closed
appirio-tech/connect-app
556670929
Title: Button clickable - nothing happens Question: username_0: Expected behavior Button should be clickable or something should happen Actual behavior Nothing is happening Steps to reproduce the problem Login to the app as Manager Click on new project -> Observe the button with lines in the screenshot Screenshot/screencast ![Screen Shot 2020-01-29 at 12 14 16 PM](https://user-images.githubusercontent.com/57220292/73333887-2b28b280-4291-11ea-9889-0c0bdeb474b6.png) -- Environment OS: Macbook Air 10.15.2 Browser (w/version): Chrome Version 79.0.3945.130 (Official Build) (64-bit) User role (client, copilot or manager): <EMAIL> Account used: <EMAIL> Answers: username_0: Label (can't edit): BugHunt_JanRelease, P3, Functional username_0: @bug-hunt-helper add label: BugHunt_JanRelease, Functional, P3 Status: Issue closed username_1: Out of scope and as per design. @username_0 that button is to switch between list and grid views, and it's currently selected.
jblindsay/whitebox-tools
1081493761
Title: BurnStreamsAtRoads - `--roads` does not use `--wd` Question: username_0: It appears that the BurnStreamsAtRoads tool does not use `--wd` / `working_directory` for `--roads`. This command errors:

```
$ whitebox_tools.exe --run=BurnStreamsAtRoads --dem=dem.tif --streams=streams.shp --roads=streets.shp --width=270 --output=burn.tif -v --wd=E:/temp

*********************************
* Welcome to BurnStreamsAtRoads *
*********************************
Reading streams and roads data...
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }', src\vector\shapefile\mod.rs:238:56
stack backtrace:
```

Whereas this works as expected:

```
whitebox_tools --run=BurnStreamsAtRoads --dem=dem.tif --streams=streams.shp --roads=E:/temp/streets.shp --width=270 --output=burn.tif --wd=E:/temp
```

https://github.com/username_1/whitebox-tools/blob/6c41158d8464636ae6c4860a121308bf4f429e72/whitebox-tools-app/src/tools/hydro_analysis/burn_streams_at_roads.rs#L238-L258

I think the following lines are missing:

```rust
if !roads_file.contains(&sep) && !roads_file.contains("/") {
    roads_file = format!("{}{}", working_directory, roads_file);
}
```

Answers: username_1: I've just added code to append the working directory to the roads file if unspecified. This fix should resolve your issue and will be available in the upcoming v 2.1 release. Status: Issue closed
JiiHu/Tenttiarkisto
121336592
Title: General Question: username_0: When the window is small enough, or the screen is a phone's, the left-hand texts in the top bar are not aligned with the left edge of the content. On a large screen, the right edge of the "Hosting provided by" text in the footer is not aligned with the right edge of Login and Add exam.
jupyterhub/jupyterhub
957591136
Title: Consider relaxing referer check on hub API requests Question: username_0: ### Proposed change Currently several hub API endpoints are protected by a referer check: https://github.com/jupyterhub/jupyterhub/blob/c00c3fa28703669b932eb84549654238ff8995dc/jupyterhub/apihandlers/base.py#L53-L61 This means API requests must originate from `http(s)://<hostname>/hub/`. The RBAC work exposes more granular access to functions which were previously for admins only, which means a single-user extension might want to make hub API calls. For example, in https://discourse.jupyter.org/t/plans-on-bringing-rtc-to-jupyterhub/9813/7 I made a user an admin of a group. I tried to get the list of users in the group using `GET /hub/api/groups/{group-name}` from a JupyterLab extension, but this is blocked because the referer URL path starts with `/user/{username}` instead of `/hub`. ### Alternative options Do nothing ### Who would use this feature? Single-user apps that want to take advantage of some functions that were previously limited to hub admins. It would potentially also be useful for someone building an external JupyterHub interface. ### (Optional): Suggest a solution Begin by describing the security risks this check is intended to protect against. Then work out whether those protections can be implemented in some other way. Answers: username_1: The fundamental issue in our security model is that auth is implemented in the single-user server, where users themselves have arbitrary execution permissions. This is the reason JupyterHub is only appropriate for 'semi-trusted' user groups, and never for fully untrusted users without lots of extra security out of scope for JupyterHub itself (e.g. by implementing an auth proxy in front of the whole application). This model makes it fairly straightforward for a user to compromise their own server and execute XSS attacks within the JupyterHub deployment. For instance, if I compromised my own server to serve an arbitrary HTML page without auth, then without this referer check I could share a link that would immediately grant me full admin permissions across the Hub, if I could convince an admin to visit that link one time. Because JupyterHub is typically served from a single domain, browser XSS protections don't apply: the Hub and single-user server are the 'same site'. So there's no real distinction between a 'good' request from your own cool JupyterLab extension and my malicious self-served page (a jupyter server extension is an easy way to do this). I think the only way to do this kind of thing is to shift the authentication responsibility to the proxy itself, which dramatically changes the scope of a proxy's role in JupyterHub. username_2: so this should work:

```
await fetch('/hub/api/groups/{group-name}', {referrer: '/hub/'})
```
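For illustration, the referer check being discussed amounts to a path-prefix test like the following (a simplified sketch, not JupyterHub's actual handler code):

```python
from urllib.parse import urlparse

def referer_allowed(referer_header, hub_prefix="/hub/"):
    """Reject API requests whose Referer path is outside the hub prefix.

    A request issued from /user/<name>/... therefore fails the check even
    if the token it carries would otherwise authorize the call.
    """
    if not referer_header:
        return False
    return urlparse(referer_header).path.startswith(hub_prefix)

print(referer_allowed("https://hub.example.org/hub/admin"))       # True
print(referer_allowed("https://hub.example.org/user/alice/lab"))  # False
```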
ComparativeGenomicsToolkit/cactus
745543799
Title: cactus stopped with a RuntimeError from cactus_bar Question: username_0: Hi: I ran cactus on 18 ~300M genomes; after 3 days, it stopped. I found lots of RuntimeErrors in the log file, mostly the 'cactus_bar' error: raise RuntimeError("Command {} exited {}: {}".format(call, process.returncode, out)) RuntimeError: Command ['cactus_bar', '--logLevel', 'INFO', '--cactusDisk', '<st_kv_database_conf type="kyoto_tycoon">\n\t\t\t<kyoto_tycoon database_dir="fakepath" host="192.168.1.143" port="8032" />\n\t\t</st_kv_database_conf>\n\t', '--maximumLength', '1000000.0', '--spanningTrees', '5', '--gapGamma', '0.0', '--matchGamma', '0.2', '--splitMatrixBiggerThanThis', '3000', '--anchorMatrixBiggerThanThis', '500', '--repeatMaskMatrixBiggerThanThis', '500', '--diagonalExpansion', '20', '--constraintDiagonalTrim', '14', '--minimumDegree', '2', '--minimumIngroupDegree', '1', '--minimumOutgroupDegree', '0', '--pruneOutStubAlignments', '--alignAmbiguityCharacters', '--useProgressiveMerging', '--largeEndSize', '5000', '--minimumSizeToRescue', '100', '--minimumCoverageToRescue', '0.5', '--minimumNumberOfSpecies', '1'] exited 128: stdout= [2020-11-17T17:27:09+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host localhost.localdomain Here is my command: nohup cactus --workDir /data/share_data/genome/whole_genome_align/1.cactus_result/18samples/workdir --stats /data/share_data/genome/whole_genome_align/1.cactus_result/18samples/jobstore /data/share_data/genome/whole_genome_align/18samples/18sample_for_cactus.txt /data/share_data/genome/whole_genome_align/1.cactus_result/18samples/18samples.hal& Is the error due to CPU or memory? I was running other software at the same time. I have now added the --restart option, and cactus is running again. I don't know if it will stop again. Thanks. Answers: username_1: There's not enough information in the log snippet you posted to tell what's going on. Running out of memory is definitely a possibility, though. username_0: Thanks. I added the --restart option, and here is the new error: [2020-11-19T09:56:19+0800] [MainThread] [W] [toil.leader] Log from job kind-CactusHalGeneratorUpWrapper/instance-l9x_b6om follows: =========> [2020-11-19T09:56:18+0800] [MainThread] [I] [toil.worker] ---TOIL WORKER OUTPUT LOG--- [2020-11-19T09:56:18+0800] [MainThread] [I] [toil] Running Toil version 4.2.0-3aa1da130141039cb357efe36d7df9b9f6ae9b5b on host localhost.localdomain.
Traceback (most recent call last): File "/home/genome/tools/cactus-bin-v1.2.3/venv/lib/python3.6/site-packages/toil/worker.py", line 364, in workerScript with fileStore.open(job): File "/usr/lib64/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/home/genome/tools/cactus-bin-v1.2.3/venv/lib/python3.6/site-packages/toil/fileStores/nonCachingFileStore.py", line 59, in open self._removeDeadJobs(self.workDir) File "/home/genome/tools/cactus-bin-v1.2.3/venv/lib/python3.6/site-packages/toil/fileStores/nonCachingFileStore.py", line 186, in _removeDeadJobs if not process_name_exists(nodeInfo, jobState['jobProcessName']): File "/home/genome/tools/cactus-bin-v1.2.3/venv/lib/python3.6/site-packages/toil/lib/threading.py", line 313, in process_name_exists nameFD = os.open(nameFileName, os.O_RDONLY) FileNotFoundError: [Errno 2] No such file or directory: '/data/share_data/genome/whole_genome_align/1.cactus_result/18samples/workdir/node-670ee851-7d12-442d-873f-75c2673999da-d87eb9ff-6a1a-480f-8a07-8b0007f85552/tmp3s2ph99j' [2020-11-19T09:56:19+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host localhost.localdomain <========= username_1: Sorry, I'm still not sure what's going on. As the error says, it's failing to load a temporary file. Perhaps it's running out of disk, though in my experience that usually manifests with a more explicit message. username_2: Hi, I have run into the same problem. Did you find a way to solve it? @username_1 @username_0 ...................................................... ...................................................... 2021-03-18 17:20:49.488538: Running the command: "cactus_fasta_softmask_intervals.py --origin=one /gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/workdir/node-eca5af6e-7a7c-4400-a5e5-52a475662774-e10f5528880c419688c859bb88739f3d/tmpfbu47kxk/d3bb3511-ec3e-41f2-a9c0-3d07412fcf9f/tmgevnf_f/node-d93dfd25-a718-4bdd-bdc7-05586f25f61e-e10f5528880c419688c859bb88739f3d/tmpq0te_a2d/adf2861a-050e-44bd-9638-954ff2392595/tmpzkk7wiz7.tmp" 2021-03-18 17:20:49.615369: Successfully ran: "cactus_fasta_softmask_intervals.py --origin=one /gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/workdir/node-eca5af6e-7a7c-4400-a5e5-52a475662774-e10f5528880c419688c859bb88739f3d/tmpfbu47kxk/d3bb3511-ec3e-41f2-a9c0-3d07412fcf9f/tmgevnf_f/node-d93dfd25-a718-4bdd-bdc7-05586f25f61e-e10f5528880c419688c859bb88739f3d/tmpq0te_a2d/adf2861a-050e-44bd-9638-954ff2392595/tmpzkk7wiz7.tmp" in 0.1143 seconds Got exit code 1 (indicating failure) from job _toil_worker toil_call_preprocess file:/gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/jobstore kind-toil_call_preprocess/instance-d0l22oq4. Job failed with exit value 1: 'toil_call_preprocess' kind-toil_call_preprocess/instance-d0l22oq4 The job seems to have left a log file, indicating failure: 'toil_call_preprocess' kind-toil_call_preprocess/instance-d0l22oq4 Log from job kind-toil_call_preprocess/instance-d0l22oq4 follows: =========> rkdir/node-eca5af6e-7a7c-4400-a5e5-52a475662774-e10f5528880c419688c859bb88739f3d/tmpfbu47kxk/d3bb3511-ec3e-41f2-a9c0-3d07412fcf9f/tmgevnf_f/node-d93dfd25-a718-4bdd-bdc7-05586f25f61e-e10f5528880c419688c859bb88739f3d/tmpc_zuc8vs/worker_log.txt [2021-03-18T16:58:50+0800] [MainThread] [I] [toil.worker] Redirecting logging to ...................................................... ......................................................
'LastzRepeatMaskJob' kind-LastzRepeatMaskJob/instance-2fe0n4e0, 'LastzRepeatMaskJob' kind-LastzRepeatMaskJob/instance-xjsjsh5y, 'LastzRepeatMaskJob' kind-LastzRepeatMaskJob/instance-7y96x2w5, 'LastzRepeatMaskJob' kind-LastzRepeatMaskJob/instance-340xmqcv, 'LastzRepeatMaskJob' kind-LastzRepeatMaskJob/instance-w2qiibq2 Traceback (most recent call last): File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/toil/worker.py", line 368, in workerScript job._runner(jobGraph=jobGraph, jobStore=jobStore, fileStore=fileStore, defer=defer) File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/toil/job.py", line 1424, in _runner returnValues = self._run(jobGraph, fileStore) File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/toil/job.py", line 1361, in _run return self.run(fileStore) File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/toil/job.py", line 1565, in run rValue = userFunction(*((self,) + tuple(self._args)), **self._kwargs) File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/cactus/progressive/cactus_prepare.py", line 770, in toil_call_preprocess cactus_call(parameters=cmd) File "/gpfs/home/tuxl/software/genome/cactus-bin-v1.3.0/venv/lib/python3.6/site-packages/cactus/shared/common.py", line 1357, in cactus_call raise RuntimeError("Command {} exited {}: {}".format(call, process.returncode, out)) RuntimeError: Command ['cactus-preprocess', '/gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/workdir/node-eca5af6e-7a7c-4400-a5e5-52a475662774-e10f5528880c419688c859bb88739f3d/tmpfbu47kxk/d3bb3511-ec3e-41f2-a9c0-3d07412fcf9f/tmgevnf_f/js', '--inPaths', '/gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/fa/G.gallus.sm.fa', '--outPaths', 'G.gallus.sm.fa', '--workDir', '/gpfs/home/tuxl/project/20210106_kangaroo_denovo/03.evo/01.result/result/cactus/workdir/node-eca5af6e-7a7c-4400-a5e5-52a475662774-e10f5528880c419688c859bb88739f3d/tmpfbu47kxk/d3bb3511-ec3e-41f2-a9c0-3d07412fcf9f/tmgevnf_f', '--maxCores', '200', '--maxDisk', '3.9 T', '--maxMemory', '500.0 G', '--realTimeLogging', '--logInfo', '--retryCount', '0', '--binariesMode', 'local'] exited 1: stdout=None [2021-03-18T17:21:08+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host cu54
goofball222/unifi
996604292
Title: Existing Devices Not Adopted on New Server Question: username_0: Distributor ID: Debian Description: Debian GNU/Linux 9.13 (stretch) Release: 9.13 Codename: stretch Docker version 19.03.15, build 99e3ed8919 Issue: I recently had to migrate to a new system. I ran rsync to mirror the old docker files and docker-compose files onto a new system with the same IP address, then pulled a new image and rebuilt the containers. All settings are there, but the switch and AP that I have are not adopted. I have docker-compose set up with separate mongo and unifi containers. Answers: username_1: Is the UniFi controller interface accessible on the new server? Are there any errors in the container logs for either the mongo or unifi service that might indicate what the issue is? As long as the Docker host IP, hostname (if configured previously), and UniFi data/database are the same, the devices should just reconnect, although it sometimes takes a few minutes for them to re-establish STUN if the controller is layer-3 remote. username_2: Well, same problem here. After some digging, I found my AP tries to connect to Docker's private address (172.19.0.4 in my environment). I have no ideas so far other than switching to host networking (and no, I don't want to do that). Please give me a little help and let me know what I missed. username_2: Open "Settings" - "Controller", set "Controller Hostname/IP" to the host IP, and check "Override inform host with controller hostname/IP". That works for me. username_0: Thanks @username_2, that worked. For anyone else having issues, this is what I did to migrate to the new system. Set the controller inform address: - Settings > System > Application Configuration > Override Inform Host - Checked > Host for Inform - Set to the IP address of the host machine (not the Docker IP) Set the inform address on the Ubiquiti devices: - SSH into each Ubiquiti device > authenticate to the device (this could be the default login or what you set in the controller) - run "info" to check the current inform address - run "set-inform http://<IPAddressOfHost>:8080/inform" if the inform address is not the IP address of the host machine If any device doesn't automatically connect back to the host you may need to do the following: - SSH into the device - Forget the device in the controller - Run "set-default" or use the reset button to reset the device - SSH back into the device (login will be the default username: ubnt and password: <PASSWORD>) - run "set-inform http://<IPAddressOfHost>:8080/inform" - It should then show up in the controller after a few minutes; when it does, you can adopt it
alin23/Lunar
714274414
Title: Lunar Diagnostics Report [E0350C2C3AAF66B4315F436D0D36948B649B3A4E9509A60ECAA32C870F735C99] [3.2.3] Question: username_0: # Issue details - Was the diagnostics process able to change the brightness on your external monitor(s)? No - Mac device where Lunar is installed (Macbook Pro 2019, iMac, Mac Mini, Hackintosh etc.): Macbook Pro 13", 2016 - Monitor connection to the Mac device (HDMI-to-USB-C, USB-C-to-USB-C, miniDisplayPort-to-DisplayPort etc.): USB-C-to-DisplayPort - Using a USB Docking Station or Hub: no - Lunar mode used (check it in the top-right corner of the Lunar interface): sync - (only if you know how to compile a C program) Does this utility work for you? - [ ] https://github.com/kfix/ddcctl ddcctl lets me run -rbc (reset brightness and contrast), and it does switch the input (-i), but it doesn't set contrast or brightness by value # Issue description: Brightness and contrast cannot be set Answers: username_1: If ddcctl does not work, it's either an issue with the monitor (no support for DDC writes) or with the connection (try another connector on the monitor maybe?). Lunar uses the same DDC implementation as ddcctl, so if that doesn't work, Lunar can't work either. I'll close this as it isn't a bug in Lunar; feel free to post updates on what you've tried and what results you get. Status: Issue closed username_0: I managed to get ddcctl and Lunar working, with the same connector. Settings to change on the monitor: **Luminance** * Eco mode: _Standard_ * DCR: _Off_
blockframes/blockframes
740743557
Title: Large white area on short page and sidenav opened Question: username_0: Go to a page whose content doesn't cover the full screen (e.g. the wishlist if you don't have too many wishes), then open the sidenav. ![image](https://user-images.githubusercontent.com/27687382/98813703-ebe4e000-2424-11eb-803a-cffba61855e6.png) I think something changed with the footer: it used to always sit just below the viewport, but now it sits directly below the content. Answers: username_1: @username_0 I confirm; this has changed, since prod still shows the previous behaviour you described. The older behaviour was better; I don't know what changed it. @GrandSchtroumpf do you know where it comes from? Status: Issue closed username_1: fixed now :)
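For reference, the classic way to get the old behaviour back (the footer pinned to at least the bottom of the viewport even on short pages) is a full-height flex column. A minimal sketch with hypothetical selectors, not necessarily how it was fixed here:

```css
/* Hypothetical selectors: make the page a full-height flex column
   so the footer is pushed to the viewport bottom on short pages. */
.page-container {
  display: flex;
  flex-direction: column;
  min-height: 100vh;
}
.page-content {
  flex: 1 0 auto; /* grow to fill the leftover vertical space */
}
.page-footer {
  flex-shrink: 0;
}
```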
jared-hughes/desmodder-video-creator
897494532
Title: Top bar cut off Question: username_0: In expanded view, it appears that the top of the shadow does not reach fully to the top of the screen. ![image](https://user-images.githubusercontent.com/20214911/119053318-c3f9aa80-b97a-11eb-850a-e909f6fa8137.png)
Yeregorix/AutoPickup
371041235
Title: Reasoning behind this plugin? Question: username_0: Hey, I was wondering about the reasoning behind the plugin: is it focused on performance, or just the luxury of not waiting for blocks/xp to be picked up by the player? Answers: username_1: Both! 😃 It's mainly to give players the luxury of not waiting for items and xp, but it also slightly improves performance because it avoids the creation of item and xp entities. Status: Issue closed
TencentCloud/tencentcloud-sdk-nodejs
590011810
Title: in a vue 2.6.10 and webpack 4.29.6 project, Can't resolve 'fs' Question: username_0:

```
$ npm run dev
./node_modules/[email protected]@request/lib/har.js
Module not found: Error: Can't resolve 'fs' in '/Users/ryan/main/works/jxlife/Jiyibao_mobile/node_modules/[email protected]@request/lib'
```

Answers: username_1: This SDK is meant for Node.js, not for the front end. The best approach is to use Node.js to access Tencent Cloud, and have the front end call your Node.js service through a request API. username_2: Indeed. Another issue is that cross-origin requests are currently not supported; even Tencent Cloud's own website has the front end send requests to the back end, and the back-end logic then calls the API. Status: Issue closed
jlippold/tweakCompatible
413656739
Title: `NFCWriter X` not working on iOS 12.1 Question: username_0: ``` { "packageId": "net.limneos.nfcwriterx", "action": "notworking", "userInfo": { "arch32": false, "packageId": "net.limneos.nfcwriterx", "deviceId": "iPhone8,1", "url": "http://cydia.saurik.com/package/net.limneos.nfcwriterx/", "iOSVersion": "12.1", "packageVersionIndexed": false, "packageName": "NFCWriter X", "category": "Utilities", "repository": "Limneos Repo", "name": "NFCWriter X", "installed": "0.4-81", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "net.limneos.nfcwriterx", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "NFCWriter X for iOS", "latest": "0.4-81", "author": "<NAME>", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "not working", "notes": "" } ```<issue_closed> Status: Issue closed
cmv/cmv-app
801603117
Title: Save Layer Visibility State in URL Query Params Question: username_0: I'm new to Dojo so I'm not sure my code will compile, but I have a working method to save the current operational layers (not the basemap) in the window URL query parameters. This means you can send someone a link and it will turn on the layers named in the URL query. I'm sure there are bugs in this as it is early days in testing, but if anyone is interested in this and can help with a pull request or look over the code, let me know. Thanks Answers: username_0: Are the layer IDs forced to be unique across all layers? I am using the IDs in this instance. username_0: Looks like there is already a community widget for this... I'll have to check that out. Status: Issue closed
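For anyone sketching the same idea, the general pattern, independent of CMV's widget internals, is to serialize the visible layer IDs into the query string and re-apply them on load. A hedged sketch against the ArcGIS JS API 3.x map object that CMV wraps (the function names here are made up; `map.layerIds`, `map.getLayer`, and `layer.setVisibility` are 3.x API members):

```javascript
// Write the currently visible operational layer ids into the URL.
function saveVisibleLayersToUrl(map) {
  var visible = map.layerIds.filter(function (id) {
    return map.getLayer(id).visible;
  });
  var params = new URLSearchParams(window.location.search);
  params.set('layers', visible.join(','));
  history.replaceState(null, '', '?' + params.toString());
}

// On startup, re-apply visibility from the URL, if present.
function restoreVisibleLayersFromUrl(map) {
  var params = new URLSearchParams(window.location.search);
  if (!params.has('layers')) { return; }
  var wanted = params.get('layers').split(',');
  map.layerIds.forEach(function (id) {
    map.getLayer(id).setVisibility(wanted.indexOf(id) !== -1);
  });
}
```

On the uniqueness question: within a single 3.x map, layer ids are unique (the API generates unique ids when none is supplied), which is what makes keying on ids workable here.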
Door43/translationStudio2
69289139
Title: Verify Compression for JSON Question: username_0: The web server is set to compress in-transit JSON payloads that are in excess of 10K if the client accepts that. 1. Does tS support that? 2. Is it working? 3. Should we lower the threshold to 1K? Answers: username_1: @username_0 we could probably lower the threshold on this. Could you give me an example of a compressed file so I can test it in the app? I'm guessing it'll probably work fine if it's normal server to client compression e.g. gzip. username_0: Any file on the api larger than 10K should be automatically compressed (yes it is the "normal server to client compression"). Try https://api.unfoldingword.org/ts/txt/2/1pe/en/ulb/source.json. username_1: this is working. Is there a reason why you are just compressing files larger than 10K? Why not compress everything? Status: Issue closed username_0: Great. I've lowered the threshold to 1k.
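The thread doesn't say which web server fronts the API, but for reference, this kind of threshold change is a one-liner in nginx; a sketch assuming an nginx front end:

```nginx
# Compress JSON responses larger than 1 KB when the client sends
# Accept-Encoding: gzip; payloads below the threshold are sent as-is.
gzip            on;
gzip_min_length 1024;
gzip_types      application/json;
```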
jaegertracing/jaeger-operator
1137111087
Title: jaeger-operator does not create a jaeger agent when a jaeger instance is created after installation Question: username_0: **Describe the bug** The jaeger-operator does not create the jaeger agent when a jaeger instance is created after installing the operator. **To Reproduce** I am using the following file to introduce ES storage for the collector and query components:
```
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod-es
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://search-test-g7fbo7pzghdquvvgxty2pc6lqu.us-east-2.es.amazonaws.com
        index-prefix: jaeger-span
        username: test
        password: <PASSWORD>
```
**Expected behavior** The jaeger agent, collector, and query components should be created. The collector and query components are created, but the **jaeger agent** component is **not**. I then deployed my app manifest with auto-inject set to true, but since the agent is not created, no traces are sent to the configured collector. My app deployment file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app1
  annotations:
    "sidecar.jaegertracing.io/inject": "true"
spec:
  selector:
    matchLabels:
      app: node-app1
  template:
    metadata:
      labels:
        app: node-app1
    spec:
      containers:
        - name: node-app1
          image: mycontainerstore.azurecr.io/node-web-app
          ports:
            - containerPort: 3100
              protocol: TCP
```
**Screenshots** ![image](https://user-images.githubusercontent.com/39674269/153848458-b4af02f1-55f2-47be-bd2d-d7d5ea062312.png) **Version (please complete the following information):** - OS: AKS Azure kubernetes cluster service - Jaeger version: 1.31.0 - Deployment: Kubernetes
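Two things worth checking in this situation (hedged, since the report doesn't include the operator logs): with the production strategy, the operator does not deploy a standalone agent by default, it expects the agent to run as an injected sidecar or as a DaemonSet you request explicitly, and sidecar injection only happens in namespaces the operator actually watches. A sketch of the explicit DaemonSet request, following the jaeger-operator docs:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod-es
spec:
  strategy: production
  agent:
    # Ask the operator for a per-node agent instead of relying
    # solely on sidecar injection into application pods.
    strategy: DaemonSet
```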
auspicious3000/autovc
444740331
Title: AttributeError: 'numpy.ndarray' object has no attribute 'numpy' Question: username_0:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-21-1ec7926c8eff> in <module>()
     15     c = spect[1]
     16     print(name)
---> 17     waveform = wavegen(model, c=c)
     18     librosa.output.write_wav(name+'.wav', waveform, sr=16000)

/content/autovc/synthesis.py in wavegen(model, c, tqdm)
     46
     47     """
---> 48     c = c.numpy()
     49
     50     model.eval()

AttributeError: 'numpy.ndarray' object has no attribute 'numpy'
```

Answers: username_1: Error fixed username_0: Thanks. I also commented out `g_checkpoint = torch.load('autovc.ckpt')` and replaced it with `g_checkpoint = torch.load('/content/autovc/autovc.ckpt', map_location='cuda:0')` at line 18 of converter.ipynb, which was also giving an error. Also, how can I test this with my own source and target wav files? username_1: As authors, we certainly appreciate your concerns, and we thank you for your interest in our method. On the other hand, as researchers from IBM, while we are committed to ensuring that our results can be verified and reproduced, we also must balance those considerations against potential misuse of the technology, per our company's policy. We believe that withholding a small fraction of code is a workable compromise that allows the community to nonetheless easily understand and evaluate the method. If you'd like to discuss further, please contact us at <EMAIL>. Status: Issue closed
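The "Error fixed" comment doesn't show the patch, but the failure mode is clear from the traceback: `wavegen` assumes `c` is a torch tensor, while the caller already passes a NumPy array. A defensive guard along these lines handles both cases (a sketch, not the repository's actual fix):

```python
import numpy as np
import torch

def to_numpy(c):
    # Accept either a torch tensor (possibly on GPU) or anything
    # already array-like, and always return a NumPy array.
    if torch.is_tensor(c):
        return c.detach().cpu().numpy()
    return np.asarray(c)
```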
onnx/onnx
713125472
Title: ONNX Reshape operator target shape data does not account for possible endian architecture differences Question: username_0:

```
E RuntimeError: Inferred shape and existing shape differ in dimension 0: (72057594037927936) vs (1)
/usr/local/lib/python3.7/dist-packages/onnx-1.7.0-py3.7-linux-s390x.egg/onnx/shape_inference.py:35: RuntimeError
----------------------------------------------------------- Captured stderr call -----------------------------------------------------------
(op_type:Softmax, node name: ): Inferred shape and existing shape differ in dimension 0: (72057594037927936) vs (1)
```

Notice the big integer: 72057594037927936. Viewing the number in hex shows a byte-swapping issue. Attached is a patch file with the source code changes that work around the endianness issue with the `Reshape` operator's target shape being stored in the raw-data binary format, but how this shape metadata is stored should be discussed as an architecture issue. This kind of metadata should be stored as protobuf fields to remove the hidden endianness assumptions. reshape-op-endian.patch.zip Status: Issue closed Answers: username_0: Supplying piecemeal patches to the code base is not a workable solution for adding big-endian support. Adding big-endian support is an architectural design issue. Possibly this is only a downstream consumer issue to be solved in the downstream code base. Another possible design is preprocessing models, but identifying which data is endian-sensitive is itself a problem to solve. Closing the issue with these notes.
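For readers hitting the same symptom: 72057594037927936 is 0x0100000000000000, i.e. the int64 value 1 with its bytes reversed. A hedged sketch of the model-preprocessing idea on a big-endian host, byte-swapping the little-endian `raw_data` of int64 initializers into native order before use (field names are from the ONNX protobuf; treat everything else as an assumption about your model):

```python
import numpy as np
import onnx

model = onnx.load("model.onnx")
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64 and init.raw_data:
        # raw_data is serialized little-endian per the ONNX spec;
        # reinterpret explicitly, then re-emit in native byte order so
        # a native-order reader sees the intended values.
        vals = np.frombuffer(init.raw_data, dtype="<i8")
        init.raw_data = vals.astype(np.int64).tobytes()
onnx.save(model, "model-native-endian.onnx")
```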
springdoc/springdoc-openapi-gradle-plugin
1038292755
Title: Yaml spec file via api-docs.yaml URL has no parameters for fetching by group when using GroupedOpenApi Question: username_0: When we use the plugin to generate YAML specs from code via the Gradle task, we are not able to pass a URL parameter to fetch them by group; we can only download all the APIs in a single file. We use `GroupedOpenApi` to define groups; it would be nice to have a way to generate a YAML-format spec for those specific groups. Status: Issue closed Answers: username_1: for groups, use the `groupedApiMappings` property.
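A sketch of what that can look like in `build.gradle`; the group names, port, and output file names below are placeholders, and the exact property shape may differ between plugin versions, so check the plugin README:

```groovy
openApi {
    outputDir = file("$buildDir/docs")
    // Map each group's api-docs URL to the file it should be written to.
    // springdoc serves per-group YAML under /v3/api-docs.yaml/{group}.
    groupedApiMappings = [
        "http://localhost:8080/v3/api-docs.yaml/groupA": "groupA.yaml",
        "http://localhost:8080/v3/api-docs.yaml/groupB": "groupB.yaml"
    ]
}
```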
Wynncraft/Issues
174984934
Title: Mana Potion Glitched Question: username_0: Self explanatory http://i.imgur.com/47ZFzO6.png Answers: username_1: If you switch to another world, is the merchant still glitched? username_0: Still broken, alright username_2: This is just a coding error; it should still be the correct potion. username_3: I just tried buying/using those mana potions and they didn't work. They just act as regular water bottles. Right clicking them doesn't do anything, and holding right click drinks them like in vanilla Minecraft. Images of potions: [Mana3](https://puu.sh/sEFIF/d8e99db6fc.png) [Mana4](https://puu.sh/sEFII/365645a21b.png) Video of trying to use Mana3 to no effect: https://www.youtube.com/watch?v=tPWKl1MdhYA username_2: Okay, @username_6, can you fix this, or does it need to be passed up to an admin? username_4: This is still an issue. I have seen it within the last 48 hours. username_5: Issue has been fixed. Status: Issue closed
wso2/product-is
624707427
Title: Tenant-qualify SCIM2 response content URLs Question: username_0: When tenant context is enabled: - Use ServiceURLBuilder to build the SCIM2 endpoint URLs - Get the tenant domain from the context - Tenant-qualify the resource type URLs in the response - Tenant-qualify the response Location header Status: Issue closed
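For context, a sketch of what the first task typically looks like with identity.core's ServiceURLBuilder. This is hedged: the exact class and method names (create, addPath, getAbsolutePublicURL, and the exception type) are assumptions about the framework version in use and should be verified:

```java
import org.wso2.carbon.identity.core.ServiceURLBuilder;
import org.wso2.carbon.identity.core.URLBuilderException;

// Sketch only: build a tenant-qualified SCIM2 Location URL.
// The builder reads the tenant domain from the thread-local context,
// producing e.g. https://host:9443/t/<tenant>/scim2/Users/<id>
// when tenant-qualified URLs are enabled.
public final class Scim2LocationUrl {
    public static String forUser(String userId) throws URLBuilderException {
        return ServiceURLBuilder.create()
                .addPath("/scim2/Users/" + userId)
                .build()
                .getAbsolutePublicURL();
    }
}
```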
sul-dlss/exhibits
207333768
Title: Sorting of box/folder and other fields that may be integers Question: username_0: Transferring projectblacklight/spotlight#1445 In some exhibits (such as Feigenbaum), there are fields (like box and folder) that only have integer values. Because these fields may hold non-numeric values in other exhibits, we use string field types to store in solr. This leads to odd sorting behavior when the user selects A-Z sort, e.g. 1 10 11 12 13 14 15 16 17 18 19 2 20 etc. Consider alternatives to how this can work better for exhibits that really treat these fields as numbers to improve the user experience (for example, copy fields into alternate numeric solr fields that can be optionally shown for only those exhibits).
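One concrete version of the copy-field idea, sketched as Solr schema additions; the field names and `pint` type here are illustrative, not what exhibits actually ships:

```xml
<!-- Sketch: a sortable integer shadow of each string-valued field.
     The *_ssim / *_isort naming convention is an assumption. -->
<dynamicField name="*_isort" type="pint" indexed="true" stored="false"/>
<copyField source="box_ssim" dest="box_isort"/>
<copyField source="folder_ssim" dest="folder_isort"/>
```

Note the catch that ties back to the thesis above: a plain copy into an integer field fails to index documents whose box/folder values are non-numeric, which is exactly why numeric sorting would have to be opt-in per exhibit rather than a global change.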
indiesquidge/dinner_dash
60566918
Title: Unauthenticated User can log in, which does not clear the cart Question: username_0: As an Unauthenticated User When I visit /login And I enter my credentials to log in Then I am redirected to the /cart page And my cart should be unchanged by the login As an Unauthenticated User When I visit /login And I enter my credentials incorrectly Then I should see a flash message “Incorrect username or password, try again” And my cart should be unchanged by the login failure And I should still be on the /login page Status: Issue closed
axios/moxios
685527376
Title: Is there any plan to release the next version? Question: username_0: Hey, I'm using this mock library and I see that the changes I need were merged in this PR -> https://github.com/axios/moxios/pull/35 Can I get an estimate of when this change will land in a release? Without this PR you can't mock multiple requests to the same URL when the method is the only difference between them. Thanks Answers: username_0: Closing this for now. Anyone who wants to mock a URL in combination with the method can use this API: ![image](https://user-images.githubusercontent.com/7091543/107148305-28a87080-695b-11eb-803a-434b59864fbd.png) Status: Issue closed
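The screenshot content isn't preserved here; judging from the moxios README, the API in question is most likely `stubOnce`, which keys a stub on both the HTTP method and the URL. A hedged sketch with made-up payloads:

```javascript
// Two stubs for the same URL, distinguished by method:
// stubOnce(method, urlOrRegExp, response) per the moxios README.
moxios.stubOnce('GET', '/users', {
  status: 200,
  response: [{ id: 1, name: 'Ada' }],
});
moxios.stubOnce('POST', '/users', {
  status: 201,
  response: { id: 2, name: 'Grace' },
});
```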
jlippold/tweakCompatible
413673909
Title: `Muze 3` working on iOS 12.0.1 Question: username_0: ``` { "packageId": "com.hackyouriphone.muze3", "action": "working", "userInfo": { "arch32": false, "packageId": "com.hackyouriphone.muze3", "deviceId": "iPhone10,2", "url": "http://cydia.saurik.com/package/com.hackyouriphone.muze3/", "iOSVersion": "12.0.1", "packageVersionIndexed": false, "packageName": "Muze 3", "category": "HYI - Theme HD", "repository": "HackYouriPhone", "name": "Muze 3", "installed": "1.1", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.hackyouriphone.muze3", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "Colourful explosions.", "latest": "1.1", "author": "Purdixx", "packageStatus": "Unknown" }, "base64": "<KEY>", "chosenStatus": "working", "notes": "" } ```
CloverHackyColor/CloverBootloader
542533086
Title: ./buildme Fails Under Catalina Question: username_0: A number of linking errors occurred:

```
...skipping...
make: _main in nasm.o
*** [rdoff/rdf2bin] Error 1
"_symtabInsert", referenced from:
"_src_free", referenced from:
_processmodule in ldrdf.o
_main in nasm.o
make:
"_globalbits", referenced from:
_main in nasm.o
*** [rdoff/rdfdump] Error 1
_main in ldrdf.o
"_rdfloadseg", referenced from:
_processmodule in ldrdf.o
_main in ldrdf.o
"_add_seglocation", referenced from:
_main in ldrdf.o
"_rdfnewheader", referenced from:
_main in ldrdf.o
ld: symbol(s) not found for architecture x86_64
"_nasm_free", referenced from:
_main in ldrdf.o
"_done_seglocations", referenced from:
_main in ldrdf.o
clang: error:
"_rdfgetheaderrec", referenced from:
linker command failed with exit code 1 (use -v to see invocation)
_processmodule in ldrdf.o
_main in ldrdf.o
"_nasm_malloc", referenced from:
_processmodule in ldrdf.o
_main in ldrdf.o
_loadmodule in ldrdf.o
"_nasm_strdup", referenced from:
_processmodule in ldrdf.o
_main in ldrdf.o
_loadmodule in ldrdf.o
"_symtabFind", referenced from:
_processmodule in ldrdf.o
_main in ldrdf.o
"_rdl_openmodule", referenced from:
_main in ldrdf.o
"_fwriteint32_t", referenced from:
_main in ldrdf.o
ld: symbol(s) not found for architecture x86_64
make: *** [rdoff/rdflib] Error 1
clang: error: linker command failed with exit code 1 (use -v to see invocation)
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [nasm] Error 1
make: *** [rdoff/ldrdf] Error 1
```

I also tried the commands `./buildme XCODE8` and `./buildme XCODE11`, which yielded the same results. Am I doing something wrong, or does the build fail under Catalina? Using OS X 10.15.2, Xcode 11.3 with the additional tools. Any help would be greatly appreciated. [nasm.make.log.txt](https://github.com/CloverHackyColor/CloverBootloader/files/4002423/nasm.make.log.txt) Answers: username_1: Hi, I just tried and nasm builds perfectly fine on 10.15.2 with Xcode 11.3. Not sure what the problem could be for you. username_0: I've got it working now. I had to remove Xcode completely, reinstall it with the "Xcode Extensions", and then mysteriously everything worked. I did not see any difference in the Xcode version whatsoever. Status: Issue closed
dwwoelfel/oneblog-test
541349051
Title: On Work Question: username_0: _Originally published on February 22nd, 2014_ I had the pleasure of meeting <NAME> today. You could tell the enthusiasm he had for what he was doing, and you could also sense something that's very rare in people. Whether we talked about clients or methods of learning, he was focused on being excellent. From stories of making many, many variations, to the way he preferred to work with people, it was all about striving to make something great together. A lot of designers aim to just get their work done, but being excellent is much more than that. It's being on the same team as your clients, doing the best you can with them: standing up for your decisions, coming in with curiosity and a drive to create something together.
thymikee/jest-preset-angular
241010173
Title: Expected 'styles' to be an array of strings. Question: username_0: Running into this error: `Expected 'styles' to be an array of strings.` After we changed our Angular component definitions to the following: ```TypeScript @Component({ selector: 'summary', styles: [require('./summary.scss')], template: require('./summary.html') }) ``` I am still wrapping my head around the difference between webpack compiling and running our webapp, and Jest doing its own transforms and TypeScript compilation, but I suspect it has to do with the preprocessor. Am I right? I tried changing the following line: `const STYLE_URLS_REGEX = /styleUrls:\s*\[\s*((?:'|").*\s*(?:'|")).*\s*.*\]/g;` to this line: `const STYLE_URLS_REGEX = /styles:\s*\[\s*((?:'|").*\s*(?:'|")).*\s*.*\]/g;` to no avail 🤓 halp! Answers: username_1: Is this even a valid way to provide styles? Why can't you use styleUrls? username_0: I get the same error if I use `styleUrls` :( `Expected 'styleUrls' to be an array of strings.` username_1: Yes, because you need to pass an array of strings: ``` styleUrls: ['path/to/style', './other/path'] ``` username_0: That makes sense. But as I am working on getting testing running with Jest (and apart from this it works great, with the help of your preset package), and our webapp compiles and runs just fine using webpack, I'm trying to figure out what the exact difference is here. username_1: Because we're unit/integration testing, we want styles to be stubbed (and eventually replaced by something like `styles: []`). This regex will work for a single require call: ```js /styles:\s*\[\s*require\((.*)\).*\s*.*\]/g ``` Just tweak it to work with multiple requires and you'll be good :) username_0: Thanks! This seems to indeed have been the problem! I used the following regex: ```regex styles:\s*\[\s*require\((.*)\).*] ``` Status: Issue closed
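For anyone landing here with multiple `require()` calls in one styles array, a variant of the same idea (an untested sketch in the spirit of the thread; anchor it with a test against your own components):

```js
// Matches a styles array containing one or more require(...) entries,
// e.g. styles: [require('./a.scss'), require('./b.scss')]
const STYLES_REGEX = /styles:\s*\[\s*(?:require\s*\([^)]*\)\s*,?\s*)+\]/g;

// In the preprocessor, stub the whole array out:
// source = source.replace(STYLES_REGEX, 'styles: []');
```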
haskell/haskell-language-server
745388945
Title: Smarter import suggestion with defaults Question: username_0: The code action to add imports is great and I use it all the time, but my interactions with it can sometimes be a little repetitive. For example, if I use the type `Int32` HLS suggests a litany of potential imports, but every time I use this name I want to import `Data.Int` from `base`. There are a host of these "defaults" and it would be great to inform HLS of these to avoid cluttering the code actions with options I'm never going to choose. ![image](https://user-images.githubusercontent.com/857308/99494831-a5efc500-29ac-11eb-8020-04ae6b8df161.png) The defaults could be specified either as: - A name and the corresponding import statement, in the above example: `('Int32, "import Data.Int")` - Just a module name `Data.Int` with the meaning of, if this exists at all in the suggestions, don't suggest anything else On a similar topic, it would be nice to specify "canonical" imports for types, for example I'm sure that despite `Int32` being reexported from `Foreign`, `Foreign.Safe`, `GHC.Int` and `UnliftIO.Foreign`, nobody ever imports it from those modules (specifically, i.e. not with an open import). Does HLS have information at code-action generation time on the provenance of the symbols in the imports it's suggesting? In the above example this would present the import options from `Data.Int` and the `Vulkan` imports (assuming the user hasn't specified a canonical import for `Vulkan.Core10.CommandBufferBuilding.ClearColorValue.Int32`, in which case that canonical module would be returned). I'm not sure that having a built in list of these default would be sensible, especially when alternative preludes get involved, but it would be nice to have some configuration that users could opt into easily to avoid people duplicating large amounts of the specifications in their configs. Other motivating examples: - Anything from `Data.Traversable`, `Data.Foldable`, `Data.Maybe`. Basically any import from base which doesn't gobble up common names. - I quite often like: `import Data.Text (Text); import Data.Text qualified as T` (would have to present the options for `Data.Text.Lazy` too for when the user wants that) - Similarly for `import Data.Vector(Vector); import Data.Vector qualified as V` Answers: username_1: The mechanism from https://github.com/haskell/ghcide/pull/861 should allow this
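Nothing like this exists in haskell-language-server today, so purely as a strawman for the configuration shape being proposed, every key and value below is invented for illustration:

```json
{
  "haskell.plugin.importLens.canonicalImports": {
    "Int32": "import Data.Int",
    "Text": "import Data.Text (Text)\nimport qualified Data.Text as T",
    "Vector": "import Data.Vector (Vector)\nimport qualified Data.Vector as V"
  }
}
```

A shareable map like this would also let alternative preludes ship their own canonical-import defaults instead of every user duplicating them in personal config.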
fionajessica23/rea_challenge
328949844
Title: Consider removing unused packages [1 effort - 2 value] Question: username_0: Try to find which packages are not being used and run `yarn remove [package name]` on them. Nodemon, as an example, does not appear to be used. https://github.com/username_1/rea_challenge/blob/422f8e48921747459a9337e4f545a3f87298b205/package.json#L24 Answers: username_1: 👍 Status: Issue closed
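A quick way to get candidates, hedged because depcheck reports false positives for packages referenced only from scripts or config files, so verify each one before removing it:

```sh
# List dependencies that no source file appears to import
npx depcheck

# Then remove the confirmed ones
yarn remove nodemon
```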
hazelcast/charts
491894278
Title: Configuring service-dns fails when installed from helm chart Question: username_0: When I run the Hazelcast server using the helm chart with the service-dns value configured, it fails with the following error: Caused by: com.hazelcast.config.InvalidConfigurationException: Properties 'service-dns' and ('service-name' or 'service-label-name') cannot be defined at the same time I override the config properties from the parent chart as follows ``` hazelcast: cluster: memberCount: 2 service: clusterIP: "None" hazelcast: rest: true yaml: hazelcast: network: join: multicast: enabled: false kubernetes: enabled: true service-dns: ${serviceName} management-center: enabled: ${hazelcast.mancenter.enabled} url: ${hazelcast.mancenter.url} ``` I tried overriding the service-name property to null but it is still not working. Answers: username_1: I tried the same and it does not work. However, when I fetched the helm chart and modified it locally, `helm install .` creates the correct ConfigMap. username_1: I checked that one again and I see it is similar to this issue: https://github.com/helm/helm/issues/5534 Apparently, when you do `helm install hazelcast/hazelcast -f values.yaml`, helm merges both the local and remote yaml files, so we need to set `service-name: null` as @username_0 did. local values.yaml ``` yaml: hazelcast: network: join: multicast: enabled: false kubernetes: enabled: true service-dns: ${serviceName} service-name: null management-center: enabled: ${hazelcast.mancenter.enabled} url: ${hazelcast.mancenter.url} ``` It is good so far, but then we receive an exception with that configuration, thrown by Hazelcast:

```
Exception in thread "main" com.hazelcast.config.InvalidConfigurationException: The configuration entry under hazelcast/network/join/kubernetes/service-name is null. Please check if the provided YAML configuration is well-indented and no blocks started without sub-nodes.
	at com.hazelcast.config.yaml.YamlDomChecker.reportNullEntryOnConcretePath(YamlDomChecker.java:71)
	at com.hazelcast.config.yaml.YamlDomChecker.check(YamlDomChecker.java:47)
	at com.hazelcast.config.yaml.YamlDomChecker.check(YamlDomChecker.java:50)
	at com.hazelcast.config.yaml.YamlDomChecker.check(YamlDomChecker.java:50)
	at com.hazelcast.config.yaml.YamlDomChecker.check(YamlDomChecker.java:50)
	at com.hazelcast.config.YamlConfigBuilder.parseAndBuildConfig(YamlConfigBuilder.java:153)
	at com.hazelcast.config.YamlConfigBuilder.build(YamlConfigBuilder.java:131)
	at com.hazelcast.config.YamlConfigBuilder.build(YamlConfigBuilder.java:122)
	at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:136)
	at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:91)
	at com.hazelcast.core.server.StartServer.main(StartServer.java:46)
```

Apparently, we do not allow passing null in YAML files to the Hazelcast configuration. Do you have any suggestions here, @username_2 @blazember? username_1: I had a quick chat with @blazember. I think we need to fix this in the hazelcast-kubernetes plugin by introducing a new parameter `discovery-mode: API | DNS` and deprecating `service-dns`. If `discovery-mode` is `DNS` then `service-name` can be used instead of `service-dns`. username_2: The workaround for this (or maybe even a proper solution) is to set `service-name: ` to an empty string. I think we'll not introduce `discovery-mode API | DNS`, but rather extract the DNS Lookup discovery as a separate module (or include it in hazelcast/hazelcast).
username_3: For completeness, I'm using Hazelcast 3.12.6 deployed with its helm chart version 2.10.0 on a Rancher cluster (version 2.2.8). This is the YAML file passed with custom values: ```yaml # Hazelcast custom Helm chart template --- image: repository: "nexus.internal/hazelcast/hazelcast" tag: "3.12.6" cluster: memberCount: 1 metrics: enabled: true rbac: enabled: false serviceAccount: create: false mancenter: enabled: false hazelcast: yaml: hazelcast: group: name: tomcat network: join: kubernetes: enabled: true service-dns: ${serviceName}.${namespace}.svc.cluster.local service-name: ``` (note again the white space after the ":" of `service-name`). Of course, with any of the following: - `service-name: ""` - `service-name: ''` (two single quotes) - leaving the `service-name` entry out entirely Hazelcast complains about the incorrect usage of `service-name` together with `service-dns`. This means that the YAML passed above is syntactically correct. Any idea/feedback? username_2: I think we need to fix it in the `hazelcast-kubernetes` plugin to treat empty strings the same as null. Added a "bug" label. username_2: Created an issue in hazelcast-kubernetes: https://github.com/hazelcast/hazelcast-kubernetes/issues/197 username_2: The workaround for this is to create a `ConfigMap` with the Hazelcast configuration manually and use [existingConfigMap](https://github.com/hazelcast/charts/blob/master/stable/hazelcast/values.yaml#L32). username_1: @username_2 I see the k8s [issue](https://github.com/hazelcast/hazelcast-kubernetes/issues/197) is merged and closed, so can we close this issue? Do we need any README update? username_2: Let's close it when Hazelcast `4.1` is released and the Helm chart is updated. username_2: Fixed by #170 Status: Issue closed
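Until the plugin fix ships, the ConfigMap workaround mentioned above looks roughly like this. A sketch: the ConfigMap key and values layout follow the chart's existingConfigMap convention, so double-check against the chart version you deploy:

```yaml
# hazelcast-config.yaml: a hand-written ConfigMap with only service-dns set
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-hazelcast-config
data:
  hazelcast.yaml: |-
    hazelcast:
      network:
        join:
          multicast:
            enabled: false
          kubernetes:
            enabled: true
            service-dns: my-release-hazelcast.default.svc.cluster.local
---
# values.yaml for the chart: point at the hand-written ConfigMap
hazelcast:
  existingConfigMap: my-hazelcast-config
```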
omega8cc/boa
151351290
Title: composer.phar error during HEAD installation Question: username_0: When installing a new BOA system with boa in-head (3.1.0-dev) I get the following: ``` BOA [06:13:59] ==> INFO: Installing YAML for PHP 7.0.5... curl: (23) Failed writing body (0 != 16133) mv: cannot stat ‘composer.phar’: No such file or directory BOA [06:14:13] ==> INFO: Installing Limited Shell 0.9.18.3... BOA [06:14:26] ==> INFO: Installing Redis update for Debian/jessie... ``` Installation started like this: boa in-head public DOMAIN EMAIL OCTOPUS none php-7.0 Could be nothing, but I'm mentioning it anyway. Answers: username_0: Update: I get the same error on a stable install too, with no platforms and php-7.0 username_1: Could be just a YAML compatibility issue. Status: Issue closed username_1: It was a failed composer install, probably due to temporary connectivity problems, because I can't reproduce this. Here is the relevant code: ``` install_update_composer() { if [ -x "/usr/local/bin/composer" ]; then /usr/local/bin/composer self-update &> /dev/null fi if [ ! -x "/usr/local/bin/composer" ] || [ ! -L "/usr/bin/composer" ]; then rm -f /usr/local/bin/composer rm -f /usr/bin/composer rm -rf /root/.composer mkdir -p /var/opt cd /var/opt curl -sS https://getcomposer.org/installer | php &> /dev/null mv composer.phar /usr/local/bin/composer ln -sf /usr/local/bin/composer /usr/bin/composer fi } ``` username_1: Try this manually on the server for testing: `$ cd /var/opt` `$ curl -sS https://getcomposer.org/installer | php` username_1: Ah, it was yet another side effect of using `php-7.0`
aliakseis/FFmpegPlayer
448455768
Title: render nv12 issues Question: username_0: Hello, can a simple SDL player render NV12 data directly? When I use USE_HWACCEL hardware decoding, which converts to NV12, SDL does not render it correctly. My code is as follows:

```c
#include <stdio.h>
#include "windows.h"
#include "ffmpeg_dxva2.h"
#define __STDC_CONSTANT_MACROS
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavutil/avutil.h"
#include "libavutil/imgutils.h"
#include "libavfilter/avfilter.h"
#include "SDL/SDL.h"
};
#define USE_HWACCEL 0
static FILE *output_file = NULL;
int main(int argc, char* argv[])
{
    AVFormatContext *pFormatCtx;
    int i, videoindex;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame, *pFrameTarget;
    AVFrame *sw_frame = NULL;
    uint8_t *out_buffer;
    AVPacket *packet;
    int y_size;
    int ret, got_picture;
    struct SwsContext *img_convert_ctx = nullptr;
    char filepath[] = "e:/1.MP4";
    // SDL---------------------------
    int screen_w = 0, screen_h = 0;
    SDL_Window *screen;
    SDL_Renderer* sdlRenderer;
    SDL_Texture* sdlTexture;
    avcodec_register_all();
    av_register_all();
    avdevice_register_all();
    pFormatCtx = avformat_alloc_context();
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0)
    {
        printf("Couldn't open input stream.\n");
        return -1;
    }
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        printf("Couldn't find stream information.\n");
[Truncated]
        SDL_UpdateTexture(sdlTexture, NULL, pFrameTarget->data[0], pFrameTarget->linesize[0]);
        SDL_RenderClear(sdlRenderer);
        SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, NULL);
        SDL_RenderPresent(sdlRenderer);
    }
    SDL_Delay(40);
    av_frame_unref(sw_frame);
    }
    }
    sws_freeContext(img_convert_ctx);
    SDL_Quit();
    av_frame_free(&pFrameTarget);
    av_frame_free(&pFrame);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);
    return 0;
}
```

Answers: username_1: I've just tried your code: https://github.com/username_1/SDL-example Seems to be working fine with small changes so far. However, the SDL frame transfer functionality does not seem to be efficient; probably some tweaking is required. username_1: Sorry, made an update - there was an issue. It seems to me that YV12 matches the decoding needs better than NV12. username_0: Yes, YV12 works correctly, thank you very much. username_0: Is the data after hardware decoding YV12? Also, I tried an H.265 (HEVC) video; why isn't hardware decoding supported for it? I tried Win7 and Win10 and neither worked; the dxva2_mode list does not seem to cover it. username_1: Just made some updates for the H.265 stuff; it is probably fixed now. username_1: BTW here is a portable player prototype: https://github.com/username_1/FFmpegPlayer/tree/master/QtPlayer
robbiet480/cec-web
66409371
Title: Caching Question: username_0: It seems that `/source` and `/info` are both cached for a period of time, and the cache isn't cleared when the routing changes or a device is added/removed. Need to investigate under what situations the cache is cleared and if it can be force cleared on routing changes. Answers: username_0: Also need caching for power & input status
docker/compose
150389990
Title: NameError: global name 'WindowsError' is not defined Question: username_0: After reinstalling Python on macOS El Capitan, I got the following error:

```
wildan:docker-symfony wildan$ docker-compose up
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 57, in main
  File "compose/cli/main.py", line 108, in perform_command
  File "contextlib.py", line 35, in __exit__
  File "compose/cli/errors.py", line 52, in handle_connection_errors
  File "compose/cli/utils.py", line 45, in call_silently
NameError: global name 'WindowsError' is not defined
docker-compose returned -1
```

I'm now using: docker-compose version 1.7.0, build 0d7bf73 Answers: username_1: Thanks for the report. What command are you executing for this error to appear? username_0:

```
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run
web_1 | self.check_migrations()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 163, in check_migrations
web_1 | executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 20, in __init__
web_1 | self.loader = MigrationLoader(self.connection)
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 49, in __init__
web_1 | self.build_graph()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 176, in build_graph
web_1 | self.applied_migrations = recorder.applied_migrations()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 65, in applied_migrations
web_1 | self.ensure_schema()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 52, in ensure_schema
web_1 | if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 231, in cursor
web_1 | cursor = self.make_debug_cursor(self._cursor())
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 204, in _cursor
web_1 | self.ensure_connection()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
web_1 | self.connect()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 95, in __exit__
web_1 | six.reraise(dj_exc_type, dj_exc_value, traceback)
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
web_1 | self.connect()
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 171, in connect
web_1 | self.connection = self.get_new_connection(conn_params)
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
web_1 | connection = Database.connect(**conn_params)
web_1 | File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
web_1 | conn = _connect(dsn, connection_factory=connection_factory, async=async)
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "db" (172.18.0.2) and accepting
web_1 | TCP/IP connections on port 5432?
web_1 |
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
```

username_0:

```
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/bin/docker-compose", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.14-py2.7.egg/pkg_resources.py", line 2671, in <module>
    working_set.require(__requires__)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.14-py2.7.egg/pkg_resources.py", line 654, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/distribute-0.6.14-py2.7.egg/pkg_resources.py", line 552, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: enum34>=1.0.4,<2
wildan:docker-ruby wildan$
```

Status: Issue closed username_3: Try `sudo pip install --upgrade distribute`
OpenVisualCloud/Smart-City-Sample
518215778
Title: Stadium scenario doesn't work Question: username_0: I am deploying this sample with both the traffic and stadium scenarios to a Kubernetes cluster. The traffic scenario works perfectly, but the stadium scenario doesn't show anything: there is no sensor available and no video captured. Do you have any idea? ![Screen Shot 2019-11-06 at 10 35 04](https://user-images.githubusercontent.com/6447444/68266264-2a60f580-0081-11ea-8367-1974faf406ad.png) ![Screen Shot 2019-11-06 at 10 34 51](https://user-images.githubusercontent.com/6447444/68266269-2cc34f80-0081-11ea-97d5-b7225d3f6b64.png) Answers: username_0: The stadium's analytics pod is throwing errors. I think it is related to this:

```
kubectl logs -f stadium-office1-analytics-crowd-7dfbb645b8-mbtqr
ERROR:tornado.access:500 POST /pipelines/crowd_counting/2 (127.0.0.1) 2.41ms
{"levelname": "ERROR", "asctime": "2019-11-06 11:54:01,584", "message": "Error creating destination: {'source': {'uri': 'rtsp://10.203.155.112:17000/live.sdp', 'type': 'uri'}, 'tags': {'sensor': '6u-9Pm4Bp5BYegUhuFH5', 'location': {'lat': 37.38865, 'lon': -121.95405}, 'algorithm': '4--9Pm4Bp5BYegUhr1F4', 'office': {'lat': 37.39856, 'lon': -121.94866}}, 'parameters': {'every-nth-frame': 6, 'recording_prefix': 'recordings/6u-9Pm4Bp5BYegUhuFH5', 'method': 'mqtt', 'address': 'stadium-office1-mqtt-service', 'clientid': '4--9Pm4Bp5BYegUhr1F4', 'topic': 'aad04d00-0651-41a2-adf9-c3df7d473372'}} 'destination'\n", "name": "DestinationTypes"}
ERROR:__main__:Exception on /pipelines/crowd_counting/2 [POST]
Traceback (most recent call last):
  File "/home/video-analytics/app/server/openapi_server/controllers/default_controller.py", line 161, in pipelines_name_version_post
    pipeline_id, err = PipelineManager.create_instance(name, version, connexion.request.get_json())
  File "/home/video-analytics/app/server/openapi_server/../../modules/PipelineManager.py", line 135, in create_instance
    PipelineManager.start()
  File "/home/video-analytics/app/server/openapi_server/../../modules/PipelineManager.py", line 144, in start
    pipeline_to_start.start()
  File "/home/video-analytics/app/server/openapi_server/../../modules/GStreamerPipeline.py", line 206, in start
    self._gst_launch_string = string.Formatter().vformat(self.template, [], self.request)
  File "/usr/lib/python3.6/string.py", line 194, in vformat
    result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib/python3.6/string.py", line 234, in _vformat
    obj, arg_used = self.get_field(field_name, args, kwargs)
  File "/usr/lib/python3.6/string.py", line 307, in get_field
    obj = obj[i]
KeyError: 'object_detection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/decorator.py", line 73, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/uri_parsing.py", line 132, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/validation.py", line 172, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/validation.py", line 346, in wrapper
    return function(request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/decorator.py", line 44, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/parameter.py", line 126, in wrapper
    return function(**kwargs)
  File "/home/video-analytics/app/server/openapi_server/controllers/default_controller.py", line 166, in pipelines_name_version_post
    logger.error('pipelines_name_version_post ' +e)
TypeError: must be str, not KeyError
```

username_1: What you see is expected. The stadium scenario is under development. The models are there but they are not yet connected to the sensors. Expect an update soon. username_2: Is the camera auto-discovery feature (using nmap) complete yet? username_0: Hi @username_1, Thanks for the information. I have another question about the map: is it fixed, or can we adjust it? I thought we just needed to update the `lat` and `lon` in [sensor-info.json](https://github.com/OpenVisualCloud/Smart-City-Sample/blob/master/maintenance/db-init/sensor-info.json), but it seems that's not enough. After updating, the map remains the same while the building and camera disappeared. I also found another hard-coded value [here](https://github.com/OpenVisualCloud/Smart-City-Sample/blob/cfd693fff6819373326c8d04c8b7a743495ae1e0/cloud/html/js/scenario.js#L27) username_1: Camera auto-discovery is ready. Please see sensor/README.md. username_1: Yes, there are three places you need to update: (1) maintenance/db-init/sensor-info.json, (2) maintenance/db-init/sensor-info.md, (3) the scenario.js file you mentioned. Hold on. I am writing a README on how to extend sensors, offices and maps. Yes, all are possible to extend, including the maps. username_1: (2) maintenance/db-init/sensor-info.m4 username_2: I checked already, but it didn't work as expected (the pod `traffic-office1-camera-discovery` didn't output anything related; I changed PORT-RANGE and the default username/password already). Maybe my cameras don't support ONVIF, but that's not likely. I will double-check. BTW, if it works, what should be displayed on the UI? Will the simulated cameras still be there? username_1: What's your output if you run: PORT_SCAN='-p80-65535 192.168.1.0/24' make discover If there is no camera detection, then your camera does not support ONVIF. username_1: It depends on how you modify sensor-info.json. If you still have the simulated camera provisioning info, then simulated and real IP cameras will be displayed together. If you leave only the IP camera identifiers, then only IP cameras will be displayed. What happens is that any detected cameras will be compared against this provisioning data and only matched cameras will be displayed in the UI. username_0: Hi @username_1, I changed the files you mentioned above. I can see that the object (building, camera) positions have changed. But the map isn't loaded; it is just a blank page. Any idea? ![Screen Shot 2019-11-06 at 13 15 56](https://user-images.githubusercontent.com/6447444/68272959-c564ca00-0097-11ea-84c3-50aeb4f4bd75.png)
You need to build the scenario map, since you moved to a different GPS location. The saved map tiles cover only the Hillsboro, Oregon area. username_1: This link might help you get some inner details: https://github.com/Overv/openstreetmap-tile-server. The entire earth is pretty big, about 80 GB of raw data, to include into a docker container; only a small portion is extracted as tiles/PNGs. Let me know if you have issues. See you tomorrow. username_0: Hi @username_1, thanks for the update. I've downloaded the Asia map and tried to use your script to extract data for an area in Singapore with the command `./osm_totiles.sh 103.7263 103.7672 1.3097 1.3445`. I also tried other `lon` and `lat` values, but the map always looks like this. Do you have any idea? ![Screen Shot 2019-11-06 at 16 20 00](https://user-images.githubusercontent.com/6447444/68285435-d9b5c080-00b1-11ea-9319-fe2676004f0d.png) However, the actual area that I selected from OpenStreetMap looks like this: ![Screen Shot 2019-11-06 at 16 21 30](https://user-images.githubusercontent.com/6447444/68285476-ecc89080-00b1-11ea-9254-864052a4c61c.png) Have a nice day! username_1: I tried a smaller version of the pbf: malaysia-singapore-brunei-latest.osm.pbf. This is what I got with your parameters: ![Capture](https://user-images.githubusercontent.com/3871873/68322797-ef5ac400-0078-11ea-8439-df7817bfc772.PNG) The osm-tile-server might have issues handling big files. You can work around this by downloading a smaller region. I will check if a newer version of the tile server solves this problem. username_1: After osm_host.sh, you can look at the tile server to see if your map is properly rendered: http://<your-hostname>:8080. username_0: Thanks @username_1, it worked after I reran the tile server with the smaller pbf, malaysia-singapore-brunei-latest.osm.pbf. username_1: Nice. I could not make the tile server work with asia.osm.pbf. The latest version also does not work correctly. Probably time to submit a bug against the tile server repo. username_1: Just FYI, the latest tile server v1.3.8 can render asia.osm.pbf. Status: Issue closed username_1: close for now.
thomasloven/lovelace-state-switch
1036771451
Title: "default" case has stopped working in v1.9+ Question: username_0: I've been using state-switch for quite a while with a `user` state that includes one explicit string that will match and a default case that should be active when the user doesn't match the first case. It's been working great until 1.9 & 1.91, where the default case appears to be ignored and nothing is displayed if there are no explicit matches. I slightly modified the example from the card docs and can reproduce the issue: ``` - badges: [] title: state-switch-test panel: true cards: - type: custom:state-switch entity: user default: default states: Rob: type: entities title: User A stuff entities: - switch.fr_table_lamp Mobile: type: entities title: User B stuff entities: - switch.fr_reading_lamp default: type: markdown content: > ## Unknown user ``` If the logged-in user is "Rob", I see the card for "Rob". If I change "Rob" to "Desktop", no card is displayed. I would expect the default case to take effect and a `markdown` card to be displayed. All testing done with "Flush cache and hard reload" and in Chrome. Answers: username_1: Yeah, seeing the same issue here. Status: Issue closed
pubquick/citfix
384301038
Title: Book title, page no, Location Publisher not capturing Question: username_0: <NAME>., <NAME>., & <NAME>. (2013). Formation scientifique et processus de socialisation au métier de chercheur. In <NAME> & <NAME> (Eds.), L’ Accompagnement des mémoires et des thèses Louvain: Presses Universitaires de Louvain (pp. 177–200).
conda-forge/metis-feedstock
378923636
Title: C++ runtime remains a runtime dependency despite the ignore_run_exports statement Question: username_0: cf @username_1's comment https://github.com/conda-forge/metis-feedstock/pull/20#discussion_r232060543 `conda render` shows the C++ runtime as a runtime dependency Answers: username_1: Use ``` build: ignore_run_exports: - libstdcxx-ng - libcxx ``` and you'll see that the C++ runtimes are not in the run requirements. username_0: Yeah, but it feels wrong to use the actual name of the package. username_2: What if we just patch the `CMakeLists.txt` for now? Arguably that's what they should do anyways. 😉 username_0: For reference, I have opened an issue upstream http://glaros.dtc.umn.edu/flyspray/task/167
bigyihsuan/International-Phonetic-Esoteric-Language
643196666
Title: Change `ʟ` to `ɔ` Question: username_0: `ʟ` is a relic from when CONSKIP and JUMP were both L-like sounds `ʎʟ`. Now that skip is `ʌ`, its rounded form `ɔ` should be used. This will break a lot of existing code, but it's a change that frees up a consonant for future use. Status: Issue closed
jupyter/nbgrader
298022116
Title: Add instructions for increasing autograde timeout in the FAQ Question: username_0: cf https://github.com/jupyter/nbgrader/issues/923#issuecomment-366457887 Answers: username_0: The solution for this issue is to add a note to the FAQ describing the error (that the kernel gets interrupted either during validation or during autograding), and then describing the solution, which is to put something like the following in `nbgrader_config.py`: ``` # increase timeout to 60 seconds c.ExecutePreprocessor.timeout = 60 ``` username_1: Making a fork and PR'ing this 😄 Status: Issue closed
tensorflow/serving
347824520
Title: RESTful API bad result Question: username_0: My model's signature is:

```
signature_def['predict_pair']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['dropout'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: dropout_keep:0
    inputs['left'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 128, 10)
        name: input_left:0
    inputs['right'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 128, 10)
        name: input_right:0
    inputs['trainphase'] tensor_info:
        dtype: DT_BOOL
        shape: unknown_rank
        name: is_training:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: output/scores:0
  Method name is: tensorflow/serving/predict
```

**Then I use the RESTful API:**

```python
dl = np.ones((1, 128, 10))
dl = dl.tolist()
dr = dl
test = np.ones((5,5)).tolist()
data = {"signature_name": "predict_pair","instances": [{"left": dl , "right": dr , "dropout": 1.0, "trainphase": False}]}
url = '***:predict'
response = requests.post(url,json=data)
print(response.text)
```

**I get the error below. It seems that trainphase is not processed properly, but when I test the model by loading it directly and using a feed dict, no error occurs, so is there something wrong with the RESTful API?**

```
{ "error": "The second input must be a scalar, but it has shape [1]\n\t [[Node: cond_1/Switch_1 = Switch[T=DT_FLOAT, _output_shapes=[[?,10], [?,10]], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](bn_fm_2/Reshape_1, _arg_is_training_0_3)]]" }
```

Answers: username_1: Please go to Stack Overflow for help and support: https://stackoverflow.com/questions/tagged/tensorflow-serving If you open a GitHub issue, it must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead). Thanks! Status: Issue closed
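For anyone who lands here with the same error: in the row-oriented `instances` format, every field is treated as batched, so the boolean arrives as a shape-[1] tensor while the graph's `is_training` placeholder expects a scalar. The column-oriented `inputs` format of the TF Serving REST API passes each tensor as-is, so scalars stay scalars. A sketch against the signature above (untested against this particular model; the endpoint is the elided `***:predict` URL from the report):

```python
import numpy as np
import requests

data = {
    "signature_name": "predict_pair",
    "inputs": {
        "left": np.ones((1, 128, 10)).tolist(),
        "right": np.ones((1, 128, 10)).tolist(),
        "dropout": 1.0,       # a true scalar, not [1.0]
        "trainphase": False,  # scalar bool for the is_training placeholder
    },
}
# Hypothetical endpoint standing in for the elided URL, e.g.
# http://host:8501/v1/models/<model_name>:predict
response = requests.post("http://host:8501/v1/models/mymodel:predict", json=data)
print(response.text)
```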
linsvensson/sensor.greenely
816672945
Title: Installation Question: username_0: Hi Lin,
Great work you have done. I am new to this exciting area and probably a bit stupid. I have read your installation guide, but it doesn't work for me. I downloaded the three files from your custom_components/greenely/ into my custom_components/greenely/. I set up the sensor code in my /config/configuration.yaml as you say, but it doesn't work.

You also say this:
_"Using your HA configuration directory (folder) as a starting point you should now also have this:
custom_components/greenely/__init__.py
custom_components/greenely/sensor.py
custom_components/greenely/manifest.json"_

Shall I put these three rows into my /config/configuration.yaml, or what am I misunderstanding or doing wrong?

Thanks in advance.
Anders
Answers: username_1: Hi Anders,
Thanks! It's okay, it takes a while to get into! Are you able to check your Home Assistant log to see if it mentions greenely? You can find it under Configuration -> Logs
This part: `_"Using your HA configuration directory (folder) as a starting point you should now also have this:` is just referring to what the folder should look like :)
username_0: The log is empty and I cannot find any Greenely sensors in my long list of other sensors in my entity list.
username_0: Don't ask me why, but now it works :)
username_2: After I restarted my RPi the Greenely sensor showed up.
Status: Issue closed
threefoldtech/zos
600195654
Title: implement pagination Question: username_0: The first thing to solve is to make sure the explorer frontend can load all the nodes from the explorer API. This is not the case today: https://github.com/threefoldtech/zos/blob/77b9939bffb2de0fe09657f96194b9842d3e379c/tools/explorer/frontend/src/services/tfService.js#L39-L46

Now, doing this brings up other questions. How do we deal with a huge number of nodes?

At the moment all the computation of statistics etc. is done on the client side. I think that is a good thing; we do not want to put that burden on the server. So following that train of thought, it seems we should always load all the nodes into the frontend store.

When we reach so many nodes that loading them all in memory is not an option anymore, then I think we need to rethink a bit how the server works. Caching is a solution, but it is always quite hard to get right, especially since nodes can get updated data at any time, not only every X minutes.

So long story short, for now I think we can just make all the nodes load in the background when the frontend loads, and then monitor how things evolve with the growing number of nodes?
Answers: username_1: Not quite sure what we're trying to achieve. For now, fetching all the data seems reasonable since we don't have a big user base. In the future, if our user base gets bigger, we can implement caching of the node requests. The node information does not get updated every minute I think, so we could have a cache time of 3-5 minutes or something.

If we don't implement caching, the following needs to happen:
1. pagination requests in the nodes table
- this creates another issue with the filtering of node ids and resources (since the filter will only be applied to 1 page of nodes)
- if we reactively implement filtering on the nodes, then we still have to scan our entire collection for subsets; this will cause a lot of overhead if it is done by multiple users at the same time.
2. compute the node stats in the backend and cache this request (since this request fetches all the nodes to compute the stats)
username_1: Personally, I have never seen an application load data in pieces over a period of time until everything is loaded. This will make the statistics out of sync at the start, and then the frontend will be doing a lot more re-renders than needed.
Since the request for the nodes on production is only 31.0 KB, it doesn't seem like it outweighs the effort here. Even if we had over 10k nodes, it would still be doable. Maybe we should research whether rendering the page server-side would reduce overhead?
username_2: - Rendering the page server-side is a bad idea imo; I don't think we get anything from that. On the contrary: if we do a query, we'll need to send it to the backend, the backend renders it, and sends it back. So we do a request for every query, thus increasing the load on the server by a lot for no real benefit.
- Queries on node id/farm id could be handled by the backend. There are already filters for this in the db. This also means that if you share a query, you don't waste the bandwidth sending useless nodes when it is loaded again.

All in all, if the request size is that small, I think it's doable to load everything. If we want a responsive design, we could indeed load an initial batch of 100 or so nodes, and then load the additional nodes in the background. This gives at least a couple of seconds to load the entire thing before a user would actually do a query. And if a user immediately queries while we're still streaming, we can always show a loading spinner in the node table imo.
username_0: This is what I meant by loading all the nodes. Just load a first reasonable batch once, then pull the rest in the background. That allows us to have a responsive UI and still load everything.

Going further with this idea, it would then be possible to refresh each node over time based on its last update time or something. Status: Issue closed
flutter/flutter
441442331
Title: VSCode Improve logs Question: username_0: Error logs in VS Code are not as reliable as in Android Studio. While I get a path location error in Android Studio, VS Code doesn't say anything and doesn't launch the app on the simulator. It just stares.
Answers: username_1: Can you give a concrete example or screenshot of what you're seeing in each editor?
username_0: Here is a video that shows the "nothing happens" part. I can record more if you want.

https://youtu.be/NPKEk6IxoYo
username_0: I just saw my mistake. I had to start the app from the debug panel. But the way I do it in the video should be supported.
username_1: @username_0 thanks for the video! It's now clear what's going on. You're running the `launch emulator` command. This is intended only to launch the emulator (which, in the case where it's already running, won't do anything). To start your app, you want to use `F5`, `Debug: Start Debugging`, `Debug: Start without Debugging` or the green Run button on the debug side bar.

If you don't already have a device connected, trying to launch will run the `Launch Emulator` command for you, so you can select an emulator (or connect a device) and then it will continue.

Hope this makes sense!
Status: Issue closed
username_0: @username_1 I figured that part out, as I noted. If Launch Emulator launched the app with hot reload, or if a command to do this were listed in the command palette, that would be just **convenient**. It is frustrating to open one thing from one place and another thing from a whole different panel.
username_1: @username_0 The command you're running is specifically to launch the emulator. It doesn't make sense for it to launch the app. As mentioned above, there are other commands that do exactly what you want - they start a debug session, and if there is no active device they will automatically run Launch Emulator for you. You don't need to go to a whole other panel, only use the debugging commands:

<img width="337" alt="Screenshot 2019-05-09 at 10 04 30 am" src="https://user-images.githubusercontent.com/1078012/57441446-e449e700-7241-11e9-8850-ef468d931b7d.png">

If this does not work for you (or doesn't do as you expect), please open an issue in the [Dart-Code repo](https://github.com/Dart-Code/Dart-Code) and I can take a look. Thanks!
username_0: Thanks. That is working. The problem was that I foolishly kept trying to find the command by typing **flutter** first.
username_1: Ah yes, that is unfortunate. There's a lot of functionality that's common to all languages and therefore the commands in VS Code are not specific. Launching apps is covered a little on the website at https://flutter.dev/docs/development/tools/vs-code#running-and-debugging along with some extra tips like refactors and assists - it's worth scanning through if you haven't seen it before :-)
spacetelescope/jwst
832909780
Title: Pointing calculation fixes and enhancements Question: username_0: _Issue [JP-1995](https://jira.stsci.edu/browse/JP-1995) was created on JIRA by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=bushouse):_ Implement bug fixes and enhancements to `set_telescope_pointing` script as outlined in JSOCINT-560.<issue_closed> Status: Issue closed
LoneGazebo/Community-Patch-DLL
334694282
Title: Peace offer pulled back. Question: username_0: _1. Mod version (i.e Date - 4/23):_
6/14
_2. Mod list (if using Vox Populi only, leave blank):_
Info Addict, Quick Turns
_3. Error description:_
England rescinds peace offer.
_4. Steps to reproduce (optional):_
Ask England for a peace deal.
---------------------------
Supporting information:
Please note that you can attach .zip files by dragging-and-dropping them. If possible, zip up all supporting data and post that way.

1. Log files (always attach your Logs folder, located at My Documents/My Games/Sid Meier's Civilization 5. Make sure you have enabled logging before experiencing an error! Go here to find out how: http://forums.civfanatics.com/showthread.php?t=487482):

2. Save game (always attach a save that was made a turn before the error; located at My Documents/My Games/Sid Meier's Civilization 5/ModdedSaves):
[Dido t358.zip](https://github.com/username_1/Community-Patch-DLL/files/2125669/Dido.t358.zip)

3. CvMiniDump.dmp file (attach if experiencing a game crash. Located at Program Files/Steam/steamapps/common/Sid Meier's Civilization V):

4. Screenshots (optional):
Answers: username_1: Could use more information on this.
username_0: In essence, the English say they want peace, but won't offer anything, despite being on the cusp of having to capitulate.

Earlier you guessed that the problem may arise when the civ offers a city -- it then can't follow through.
username_2: I'm confused because I can't load the save game at the moment (screenshots are extremely helpful in these cases) Are you saying your war score is 100, but they won't surrender to you?
username_0: The game is long gone, so I'm sorry but no screenshots. The score was around 80. I asked England to surrender. They offered a peace deal, including one of their cities. But the trade screen said "impossible" for their offer, and when I tried to accept, England said "No." The only offer they would accept was a white peace.

I had the identical situation occur a few patches back. Your guess was that the offer of a city was the problem. Subsequent to that game, I've had plenty of capitulations, but none involved the offer of a city.
Status: Issue closed
username_3: I just experienced the same issue. In case it's helpful I'll attach a save and logs. How to replicate: Hit "Next Turn".
Save: [Peace Treaty Bug.zip](https://github.com/username_1/Community-Patch-DLL/files/2174226/Peace.Treaty.Bug.zip) Logs: [Logs.zip](https://github.com/username_1/Community-Patch-DLL/files/2174227/Logs.zip) Screenshots: ![screenshot 8531](https://user-images.githubusercontent.com/38794241/42425211-ee7ed824-82e7-11e8-82ab-f2a76c9980f6.png) ![screenshot 8533](https://user-images.githubusercontent.com/38794241/42425212-ee9573e0-82e7-11e8-94e4-b0cd09da9b71.png) ![screenshot 8534](https://user-images.githubusercontent.com/38794241/42425213-eea91d3c-82e7-11e8-90d2-574980328e8a.png) ![screenshot 8535](https://user-images.githubusercontent.com/38794241/42425214-eeba0d68-82e7-11e8-84c9-6a2c7138956b.png) ![screenshot 8537](https://user-images.githubusercontent.com/38794241/42425215-eecfddbe-82e7-11e8-8535-a5b7211ec991.png) username_3: Bonus text bug: "We decline ." username_3: Also, "Peace Deal Value" shows up on both the left and right side of the "deal value bar". Could be related. Status: Issue closed
moby/moby
430891787
Title: docker restart problem: exec and inspect mount info are not consistent Question: username_0:
**Description**

**Steps to reproduce the issue:**
1. On my Windows 10 machine, this is my docker-compose YAML:
![image](https://user-images.githubusercontent.com/7411249/55792998-769c8500-5af4-11e9-9077-4df4b591b2ee.png)
2. Reboot the system.
3. Docker runs fine, but exec-ing into the nginx container:
![image](https://user-images.githubusercontent.com/7411249/55793157-d85cef00-5af4-11e9-9952-f204f2df71b7.png)
and then using `docker inspect nginx`:
![image](https://user-images.githubusercontent.com/7411249/55793191-ead72880-5af4-11e9-885d-54362ff2d9cd.png)
it looks like the mount is fine!!

**Describe the results you received:**
Entering the nginx container, `ls /web` is empty.

**Describe the results you expected:**
I want the nginx mount to work correctly after the OS reboots.

[Truncated]
Debug Mode (server): true
File Descriptors: 46
Goroutines: 65
System Time: 2019-04-09T10:26:30.1305722Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://registry.docker-cn.com/
http://hub-mirror.c.163.com/
https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false
Product License: Community Engine
```

**Additional environment details (AWS, VirtualBox, physical, etc.):**
win10
blockchain-etl/ethereum-etl
444952998
Title: No module named 'mythril.ether' Question: username_0: On macOS, when I run:

ethereumetl export_geth_traces --start-block 2380000 --end-block 2380100 --provider-uri ~/Library/Ethereum/geth.ipc --batch-size 100 --output geth_traces.json

this error shows up: `Symbolic Execution not available: No module named 'mythril.ether'`

But I did install mythril and the other related packages.

Environment:
- Python: 3.7.3
- ethereum-etl: 1.3.0
- eth-abi: 1.3.0
- mythril: 0.20.5

There is also another problem with the environment:
- ethereum-etl 1.3.0 requires eth-abi==1.2.0, but mythril 0.20.5 requires eth-abi==1.3.0.

How can these three work together? Looking for help, thanks!
Answers: username_1: You can safely ignore the "Symbolic Execution not available" warning. Symbolic execution is not used by ethereum-etl. This warning is output by a dependency library used in ethereum-etl: https://github.com/tintinweb/ethereum-dasm/blob/master/ethereum_dasm/evmdasm.py.
username_0: @username_1 Thanks for helping. But with this error, I can't export the geth traces to the JSON file. Here is the whole output:
```
Symbolic Execution not available: No module named 'mythril.ether'
2019-05-15 23:57:47,681 - ProgressLogger [INFO] - Started work. Items to process: 101.
Traceback (most recent call last):
File "/usr/local/bin/ethereumetl", line 10, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/ethereumetl/cli/export_geth_traces.py", line 55, in export_geth_traces
job.run()
File "/usr/local/lib/python3.7/site-packages/blockchainetl/jobs/base_job.py", line 30, in run
self._end()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/jobs/export_geth_traces_job.py", line 79, in _end
self.batch_work_executor.shutdown()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/batch_work_executor.py", line 67, in shutdown
self.executor.shutdown()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/fail_safe_executor.py", line 39, in shutdown
self._check_completed_futures()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/fail_safe_executor.py", line 47, in _check_completed_futures
future.result()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/local/lib/python3.7/site-packages/blockchainetl/jobs/base_job.py", line 28, in run
self._export()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/jobs/export_geth_traces_job.py", line 60, in _export
total_items=self.end_block - self.start_block + 1
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/batch_work_executor.py", line 50, in execute
self.executor.submit(self._fail_safe_execute, work_handler, batch)
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/fail_safe_executor.py", line 31, in submit
self._check_completed_futures()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/fail_safe_executor.py", line 47, in _check_completed_futures
future.result()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.7/site-packages/ethereumetl/executors/batch_work_executor.py", line 55, in _fail_safe_execute
work_handler(batch)
File "/usr/local/lib/python3.7/site-packages/ethereumetl/jobs/export_geth_traces_job.py", line 65, in _export_batch
response = self.batch_web3_provider.make_batch_request(json.dumps(trace_block_rpc))
File "/usr/local/lib/python3.7/site-packages/ethereumetl/thread_local_proxy.py", line 33, in __getattr__
return getattr(self._get_thread_local_delegate(), name)
File "/usr/local/lib/python3.7/site-packages/ethereumetl/thread_local_proxy.py", line 37, in _get_thread_local_delegate
self._thread_local._delegate = self._delegate_factory()
File "/usr/local/lib/python3.7/site-packages/ethereumetl/cli/export_geth_traces.py", line 51, in <lambda>
batch_web3_provider=ThreadLocalProxy(lambda: get_provider_from_uri(provider_uri, batch=True)),
File "/usr/local/lib/python3.7/site-packages/ethereumetl/providers/auto.py", line 48, in get_provider_from_uri
raise ValueError('Unknown uri scheme {}'.format(uri_string))
ValueError: Unknown uri scheme /Users/username_0/Library/Ethereum/geth.ipc
```
And I checked in the terminal: a geth.ipc does exist at the above path.
![image](https://user-images.githubusercontent.com/29030991/57859105-4e253c00-77c0-11e9-81bc-84e4c0be6830.png)
Could you please help me look into it? Thanks!
username_1: Try using `--provider-uri file://$HOME/Library/Ethereum/geth.ipc`
username_0: @username_1 Thanks! It works! Originally I thought that URI format was for Linux, so I had searched for the macOS URI format and changed it. It works with the URI you provided. Thanks so much for the help!
Status: Issue closed
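For anyone driving ethereum-etl from Python instead of the CLI, the traceback above already shows the relevant entry point; a sketch based on it follows (the `file://` scheme is the fix from this thread, and the home-directory path is illustrative, so adjust it to your own setup):
```python
from ethereumetl.providers.auto import get_provider_from_uri
from ethereumetl.thread_local_proxy import ThreadLocalProxy

# A bare path like '~/Library/Ethereum/geth.ipc' fails with
# "Unknown uri scheme"; an explicit file:// scheme is required.
uri = 'file:///Users/me/Library/Ethereum/geth.ipc'  # adjust to your path

# Same construction the CLI uses internally (see the traceback above)
batch_web3_provider = ThreadLocalProxy(
    lambda: get_provider_from_uri(uri, batch=True))
```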
Vitalijus/multiple-file-upload-with-carrierwave-to-cloudinary
768688580
Title: Keep original filename Question: username_0: Hi, thanks for your work. I would like to display the original filename of each photo in the "show" view, but the filename is replaced by a generated code. I saw that it's possible to tell Cloudinary that we want to keep the original filename, but I have tried a lot of things and nothing works. Hope you can help me, Thanks in advance.<issue_closed> Status: Issue closed
langdoc/fennougrica
184686454
Title: About naming things Question: username_0: Some decision has to be made about how to name the subcollections in a systematic way. The original Fenno-Ugrica data remains with the original filenames, but it would be good to arrange each subcollection the way I have now arranged the Four Battles books.

There are a few possible routes to take, probably all with their sensible counterarguments. For now I have used an English name for **Four Battles**, naming the folder `four_battles`. This already works less well with **In Search of the Mammoth**, although I guess one could just name the subcollection `mammoth` and live with that.

Naturally the longer titles in different languages will be in metadata, but we can't use those everywhere. I want all names to be mnemonic and easy to type. I want to use these names later in ways that would allow data to be accessed, for example, directly from R with something like:

get_corpus(name = 'mammoth')

This should give you the corpus. No log in, **no checking the spelling** -- just data streaming to your screen ready to answer serious linguistic questions. So how things are named is directly related to the ease of access different interfaces can provide. If you have to check how a corpus name is spelled every time you want to use it, then there already exists a barrier hindering its use.

### General naming conventions

- Easy to remember, short and descriptive
- No vArIaTiOn in case, possibly all lowercase?
- Not longer than a few words
- Only ASCII characters (unfortunately...)
- Can be in any language, of course!

Please argue against these rules by commenting!
cetic/sparta
1006270374
Title: xtables-monitor: fix rule printing Question: username_0: trace_print_rule does a rule dump. This prints unrelated rules in the same chain. Instead, the function should only request the specific handle. Furthermore, flush the output buffer afterwards so this plays nicely when the output isn't a terminal.
Answers: username_0: The changes to the iptables code do not affect security requirements, as they concern only display output.
Updownquark/ObServe
134277489
Title: safe() Question: username_0: I've been thinking about how to make observables, and especially combinations involving observable collections, thread-safe. By which I mean that they only fire events on a single thread at a time, since firing on multiple threads at once could cause problems in listeners if they keep state. All the collection implementations are thread-safe, but the observable and observable value implementations and collection combinations (combine(), flatten(), refresh(), etc.) are not. I was originally thinking to go through and thread-safe them all, but this might be overkill, since many applications will not need the thread-safety. So I was thinking of adding a safe() method (I have already done this for Observables and their extensions). Then I thought that if anything is thread-safe, it could just return itself from this method. This seems like it might be a good idea, except that most transformations do not affect thread safety, so they would need to basically re-create themselves using their roots' safe() instances. So the vast majority of the observable classes would need to implement the method. Almost all of these would be fairly trivial, but it's a lot of monkey work, and it's something that needs to be remembered for every future implementation. In addition, a safe(Lock) method (already in Observables) might be useful, but I'm not sure yet. The place it would be most useful is in combinations, but then the combinations could just use the default safe() implementation. So maybe this is not needed. I need to think more on this before I bloat my code unnecessarily. Answers: username_0: I mentioned in a later commit, it's not the safe methods that are breaking the tests. Tests are fixed on head. This is done unless I decide to rethink it. No testing on it yet though. Status: Issue closed
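To make the trade-off discussed above concrete, here is a minimal, language-agnostic sketch (written in Python rather than ObServe's Java, with purely illustrative names) of what a `safe()` wrapper does: funnel every event through one lock so listeners never fire on two threads at once, and have already-safe instances simply return themselves:
```python
import threading

class SafeObservable:
    """Sketch of safe(): serialize event delivery behind a single lock."""

    def __init__(self, inner):
        self._inner = inner            # the unsafe observable being wrapped
        self._lock = threading.Lock()

    def subscribe(self, listener):
        def serialized(*args, **kwargs):
            with self._lock:           # at most one listener call at a time
                listener(*args, **kwargs)
        return self._inner.subscribe(serialized)

    def safe(self):
        return self                    # already thread-safe: return self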
DonJayamanne/gitHistoryVSCode
866491216
Title: Merge commit into current branch doesn't seem to be working as expected Question: username_0: It's likely, in fact probable, that I'm not understanding this functionality, but I'm looking at Git History, I select one of the commits, and I click 'More' to reveal the dropdown list containing 'Cherry pick this ..., Checkout ..., Select, Revert, ... Merge this ..., etc.

I selected the option to 'Merge this (commitId) commit into current branch' and then it pops up another dropdown listing the full commitId along with a couple of branches. I selected the commitId, saw the confirmation asking if I really want to merge this commit, and selected 'Yes'.

However, when I do this it looks like it's merging ALL of my commits, not just the one I selected. If I instead select the 'Cherry pick this (commitId) commit into current branch' option, then it works as expected and only brings that particular commit into my branch.

Am I missing something?
Answers: username_1: I can't get the merge feature to work at all myself. No matter whether I try to merge branch A into B or B into A, nothing seems to happen.
DirectMyFile/syscall
70751358
Title: If binary callbacks (and any other binary data) are not used immediately, they must be kept alive for as long as they are in use Question: username_0: ```dart
static void bindKey(key, Function handler) {
  LibReadline.init();
  var functionType = getBinaryType("rl_command_func_t");
  var callback = new BinaryCallback(functionType, (args) {
    if (handler is ReadlineCommandRegularFunction) {
      return handler(args[0], args[1]);
    } else {
      return handler();
    }
  });
  checkSysCallResult(invoke("readline::rl_bind_key", [key is String ? key.codeUnitAt(0) : key, callback.functionCode]));
}
```

After the last use (in `checkSysCallResult`) of the binary data (`BinaryCallback`), it will be freed by the garbage collector at the first opportunity. The binary function code would then also become invalid (deallocated).

I do not know how "readline" works, but if you want to create a long-lived binary callback, you should return it and store it in a safe place. Local variables are not a good place.

If it is used only within the function body, then you can use the `keepAlive()` method:

```dart
var data = allocData();
someFunc(data);
// Above, we do not use data directly,
// but some code may still use data in physical memory.
// Without this call, data would be freed.
// Keep it alive:
keepAlive(data);
```
Answers: username_1: Ok. I'll do that.
librecaptcha/lc-core
673533666
Title: GC of solved captchas Question: username_0: * Change the `solved` column to an integer that counts the number of times the captcha was solved
* After every N captchas are created, delete the captchas that have been solved more than M times. N and M need to be configurable, but for now, we can hard-code them to 1000 and 10 respectively.
Answers: username_1: - Add a timestamp column to the captcha map table
- Update the timestamp with the current time whenever a captcha is served
- Increment the captcha's `solved` count when it is solved
- While selecting a new captcha to be served, the `solved` count should be less than M
- For GC, the `solved` count should be at least M and the last served time should be earlier than the current time minus the time allowed to solve a captcha
- For the `answer` endpoint, it should accept the solution only if the captcha was served within the time allowed to solve (i.e., the last served time is later than the current time minus the time to solve)
- When the time to solve is exceeded, or in case the captcha has been garbage collected, it should return an error status and ask the client to request a new captcha
username_0: Closed via #52
Status: Issue closed
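A sketch of the selection and GC queries implied by the list above, in Python with sqlite3 (the table and column names here are assumptions for illustration, not lc-core's actual schema):
```python
import sqlite3
import time

SOLVED_LIMIT = 10    # M in the description above
SOLVE_TIMEOUT = 300  # assumed seconds allowed to solve a captcha

def serve_one(db: sqlite3.Connection):
    # Only serve captchas solved fewer than M times, stamping serve time.
    row = db.execute(
        "SELECT id FROM captchas WHERE solved < ? LIMIT 1",
        (SOLVED_LIMIT,)).fetchone()
    if row:
        db.execute("UPDATE captchas SET last_served = ? WHERE id = ?",
                   (time.time(), row[0]))
    return row

def gc(db: sqlite3.Connection):
    # GC: solved at least M times and not served within the solve window.
    db.execute(
        "DELETE FROM captchas WHERE solved >= ? AND last_served < ?",
        (SOLVED_LIMIT, time.time() - SOLVE_TIMEOUT))
```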
tommoor/emojione-picker
141815677
Title: Example not working Question: username_0: Sorry, but can you describe how to use the example? I can't get it to work. It shows nothing when I open index.html in the browser.
Status: Issue closed
Answers: username_1: @username_0 you need to run `npm run preview` to generate the compiled file for preview, hope this helps.

https://github.com/username_1/emojione-picker#development
nsqio/pynsq
363663658
Title: SyntaxError: invalid syntax Question: username_0: ```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/user/a/venv/lib/python3.7/site-packages/nsq/__init__.py", line 29
from .async import AsyncConn
^
SyntaxError: invalid syntax
```
python 3.7.x
Answers: username_1: fixed in #212 (but there have been no tagged releases since then)
Status: Issue closed
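For background: this isn't an ordinary bug in the module's logic; `async` became a reserved keyword in Python 3.7, so any module named `async.py` (like pynsq's old `nsq/async.py`) fails to import there. A quick way to see the difference between interpreter versions:
```python
import keyword
import sys

print(sys.version_info[:2])
# True on Python 3.7+, False on 3.6 -- which is why the same
# `from .async import AsyncConn` line only breaks on newer interpreters.
print(keyword.iskeyword("async"))
```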
esphome/issues
777054582
Title: ESP32 crashes when LEDc is used Question: username_0: **Operating environment/Installation (Hass.io/Docker/pip/etc.):**

I am using docker (`:latest` and `:dev`).

**ESP (ESP32/ESP8266, Board/Sonoff):**

ESP32 on an ESP32dev clone from AZDelivery

**ESPHome version (latest production, beta, dev branch)**

`:latest` and `:dev` from dockerhub (pulled today).

**Affected component:**

https://esphome.io/components/output/ledc.html

**Description of problem:**
When I configure an output to use LEDc, the ESP crashes immediately on use of that output.
My goal is to have a status LED for a light (on when the light is on and faded when the light is off). That is why I am changing the output in the `on_turn_on` of a switch.

**Problem-relevant YAML-configuration entries:**
```yaml
esphome:
  name: crash
  platform: ESP32
  board: esp32dev

wifi:
  ssid: "XXX"
  password: "<PASSWORD>"

switch:
  - platform: gpio
    id: light
    pin: 32
    on_turn_on:
      then:
        - output.turn_on: gpio_2
    on_turn_off:
      then:
        - output.turn_off: gpio_2

output:
  - platform: ledc
    id: gpio_2
    pin: GPIO2

debug:

logger:
  level: DEBUG
```

**Logs (if applicable):**
```
[15:53:49]ets Jun 8 2016 00:22:57
[15:53:49]
[Truncated]
[15:53:49]load:0x40080400,len:5828
[15:53:49]entry 0x400806ac
[15:53:49][I][logger:166]: Log initialized
[15:53:49][I][app:029]: Running through setup()...
[15:53:49][C][switch.gpio:011]: Setting up GPIO Switch 'light'...
[15:53:49][D][switch:025]: 'light' Turning OFF.
[15:53:49][D][switch:045]: 'light': Sending state OFF
[15:53:49]/home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/freertos/queue.c:1442 (xQueueGenericReceive)- assert failed!
[15:53:49]abort() was called at PC 0x40088b65 on core 1
[15:53:49]
[15:53:49]Backtrace: 0x4008c730:0x3ffb1b70 0x4008c961:0x3ffb1b90 0x40088b65:0x3ffb1bb0 0x400d9227:0x3ffb1bf0 0x400d1ae1:0x3ffb1c10 0x400d2033:0x3ffb1c40 0x400d204f:0x3ffb1c60 0x401499a5:0x3ffb1c80 0x40149c95:0x3ffb1ca0 0x40149c73:0x3ffb1cc0 0x40149d23:0x3ffb1ce0 0x400d73c7:0x3ffb1d00 0x400d2355:0x3ffb1d20 0x400d1a39:0x3ffb1d50 0x400d2257:0x3ffb1da0 0x400d17a5:0x3ffb1dc0 0x400d17ce:0x3ffb1df0 0x40149ac1:0x3ffb1e10 0x40149b45:0x3ffb1e30 0x400d5c25:0x3ffb1e50 0x400d7325:0x3ffb1ea0 0x400d9d03:0x3ffb1fb0 0x40088e79:0x3ffb1fd0
[15:53:49]
[15:53:49]Rebooting...
```

**Additional information and things you've tried:**
I have googled the problem of an `abort()` in `queue.c` of the IDF, but it seems that that issue can have a number of reasons. Most people seem to have that problem when using an uninitialized mutex...

Answers: username_0: I have the feeling that this is happening because the `switch` is configured and then tries to use the `output` before the `output` is configured...
username_0: That hunch seems to have been correct. I hacked something together where `write_state` checks if `setup` has been called and stores the state for `setup` to set it after initialization: https://github.com/username_0/esphome/tree/fix-uninitialized-ledc
username_1: I believe I encountered this defect today, so I'll add my configuration and log for more data points. My goal is to control a solenoid valve (as a garage door lock) where the switch would first activate the solenoid with a high PWM, then transition to a lower PWM to reduce the heat build-up in the coil while the solenoid is activated for a long period of time. I tried to play around with pointing the switch to a dummy output (i.e.
junk_output), but no matter what, when the switch's on_turn_on automation was uncommented, I would get stack traces.
```yaml
substitutions:
  deviceName: test_garage_lock
  prettyDeviceName: Test Garage Lock

esphome:
  name: $deviceName
  platform: ESP32
  board: featheresp32

wifi:
  ssid: "XXX"
  password: "<PASSWORD>"

debug:

logger:
  level: DEBUG

# Status LED
# status_led:
#   pin:
#     number: GPIO13
#     inverted: yes

output:
  - platform: ledc
    id: lock_one_output
    pin: GPIO13
    frequency: 50 Hz
    min_power: 5.0% # 5% at 50Hz is 1mS (20mS cycles)
    max_power: 10.0% # 10% at 50Hz is 2mS (20mS cycles)
  - platform: gpio
    pin: GPIO27
    id: junk_output

switch:
  - platform: output
    id: test_lock_one
    name: Test Lock One
    output: junk_output
    icon: mdi:lock
    on_turn_on:
      then:
        - output.turn_on: lock_one_output
        - delay: 2s
        - output.turn_off: lock_one_output

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO14
      inverted: True
    name: Test Lock One Status
```
[Truncated]
```
(inlined by) esphome::switch_::SwitchTurnOnTrigger::SwitchTurnOnTrigger(esphome::switch_::Switch*)::{lambda(bool)#1}::operator()(bool) const at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome/components/switch/automation.h:55
(inlined by) std::_Function_handler<void (bool), esphome::switch_::SwitchTurnOnTrigger::SwitchTurnOnTrigger(esphome::switch_::Switch*)::{lambda(bool)#1}>::_M_invoke(std::_Any_data const&, bool&&) at c:\users\jingleheimer\.platformio\packages\toolchain-xtensa32\xtensa-esp32-elf\include\c++\5.2.0/functional:1871
WARNING Decoded 0x400d990d: std::function<void (bool)>::operator()(bool) const at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\components\switch/switch.cpp:53
(inlined by) esphome::CallbackManager<void (bool)>::call(bool) at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src/esphome/core/helpers.h:211
(inlined by) esphome::switch_::Switch::publish_state(bool) at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\components\switch/switch.cpp:46
WARNING Decoded 0x400d8efa: esphome::output::OutputSwitch::write_state(bool) at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\components\output\switch/output_switch.cpp:27
WARNING Decoded 0x400d97e3: esphome::switch_::Switch::turn_on() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\components\switch/switch.cpp:53
WARNING Decoded 0x400d8ec2: esphome::output::OutputSwitch::setup() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\components\output\switch/output_switch.cpp:16
WARNING Decoded 0x400d8ed6: non-virtual thunk to esphome::output::OutputSwitch::setup()
WARNING Decoded 0x4015ffdd: esphome::Component::call_setup() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src/esphome/core/component.cpp:111
WARNING Decoded 0x401600c5: esphome::Component::call() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src/esphome/core/component.cpp:111
WARNING Decoded 0x400dd821: esphome::Application::setup() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src\esphome\core/application.cpp:38
WARNING Decoded 0x400dfd8d: setup() at S:\Athena\Projects\Electronics & Software\IoT\ESPHome-Configuration\test_garage_lock/src/main.cpp:250
WARNING Decoded 0x400eaad7: loopTask(void*) at C:\Users\Jingleheimer\.platformio\packages\framework-arduinoespressif32\cores\esp32/main.cpp:14
WARNING Decoded 0x40088e7d: vPortTaskWrapper at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/freertos/port.c:355 (discriminator 1)
[15:47:14]
[15:47:14]Rebooting...

Bootloop...
```
username_2: I ran into a similar problem when using Remote Transmitter + FastLED Light at the same time; my issue was closed unsolved recently. However, username_0's fix worked for me. Hopefully someone can work out a permanent fix. Maybe it would be possible to initialize the variables in the order in which the YAML is declared, or include an "initialization order" option.
username_0: I have updated my branch at https://github.com/username_0/esphome/tree/fix-uninitialized-ledc.
username_0: https://github.com/esphome/esphome/pull/1732
SokolovAndrey1/Hashing
882676401
Title: Collisions in dictionary performance tests for perfect hash tables Question: username_0: Due to duplicate lines in the file, a lot of collisions arise when building a perfect hash table.
Answers: username_0: It works if the collision check in the sub table (`library/include/perfect_hash/sub_table.h`) is disabled:
```cpp
// if there is a collision, construct the hash table from the beginning!
if (_cells[hash].key != maxValue) {
    isCollisions = true;
    break;
}
```
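A sketch of why duplicate lines defeat the collision check: a perfect-hash builder that retries with new hash parameters (as the C++ snippet above does) can never separate two equal keys, so it loops forever unless the input is deduplicated first. An illustrative FKS-style sub-table build in Python (not the repo's implementation; names and parameters are made up for the example):
```python
import random

def build_sub_table(keys):
    """Try random hash parameters on a table of size len(keys)**2 until
    the keys land collision-free; equal keys would make this loop forever."""
    keys = list(set(keys))          # deduplicate: mandatory for termination
    m = max(1, len(keys)) ** 2      # quadratic size => expected few retries
    while True:
        a = random.randrange(1, 2**31)
        b = random.randrange(2**31)
        cells = {}
        for k in keys:
            h = (a * hash(k) + b) % m
            if h in cells:          # collision: restart with new parameters
                break
            cells[h] = k
        else:
            return a, b, m, cells

print(build_sub_table(["alpha", "beta", "gamma"]))
```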
hassio-addons/addon-nginx-proxy-manager
446827490
Title: No possibility to disable TLS1.0/TLS1.1 Question: username_0: TLS 1.0 is not secure anymore. It seems there is no way to disable specific TLS versions.
Answers: username_1: TLS 1.0 hasn't been secure for a long time :) However, this should have been resolved with the 0.2.0 release, where the upstream updated the template for SSL settings. You may have to recreate the sites due to the design of the application.
Status: Issue closed
alexa/alexa-skills-kit-sdk-for-python
381758558
Title: management client get_list returns error Question: username_0: <!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [ ] Documentation issue or request [ ] Other... Please describe: </code></pre> <!--- Provide a general summary of the issue in the Title above --> ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> service_client_fact.get_list_management_service().get_list('listID', 'active') should return a list of items. <!--- If you're suggesting a change/improvement, tell us how it should work --> ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> Am getting error below. argument of type 'NoneType' is not iterable: SerializationException <!--- Include full errors, uncaught exceptions, stack traces, and relevant logs --> <!--- If service responses are relevant, please include any --> <!--- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution ``` // Not required, but suggest a fix/reason for the bug, // or ideas how to implement the addition or change ``` ## Steps to Reproduce (for bugs) ``` // Provide a self-contained, concise snippet of code from ask_sdk_core.skill_builder import CustomSkillBuilder from ask_sdk_core.api_client import DefaultApiClient from ask_sdk_core.dispatch_components import AbstractRequestHandler from ask_sdk_core.dispatch_components import AbstractExceptionHandler from ask_sdk_core.utils import is_request_type, is_intent_name from ask_sdk_model.ui import AskForPermissionsConsentCard from ask_sdk_model.services import ServiceException from ask_sdk_model.services import list_management sb = CustomSkillBuilder(api_client=DefaultApiClient()) permissions = ["read::alexa:household:list","write::alexa:household:list"] class StartListsHandler(AbstractRequestHandler): def can_handle(self, handler_input): return is_intent_name("StartListsIntent")(handler_input) [Truncated] sb.add_request_handler(StartListsHandler()) lambda_handler = sb.lambda_handler() // For more complex issues provide a repo with the smallest sample that reproduces the bug // Including business logic or unrelated code makes diagnosis more difficult ``` ## Context <!--- How has this issue affected you? What are you trying to accomplish? --> I am trying a simple list read, create program. Currently stuck on reading items using the client service. <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## Your Environment <!--- Include as many relevant details about the environment where the bug was discovered --> * ASK SDK for Python used: 1.3.0 * Operating System and version: Using AWS Lambda ## Python version info * Python version used for development: 3.6 Answers: username_1: Hey @username_0 , sorry for responding late. This seems to be an issue on the `DefaultSerializer` module, when deserializing a `null` value. I am working on the fix and plan to release new version of the SDK with the fix soon. Status: Issue closed username_1: Hey @username_0 , the issue has been fixed in PR #48. 
I will update the issue once a new release happens. Thanks once again for letting us know!! username_1: Release `1.4.0` contains the fix. Please update to the latest SDK to get the fix. username_0: Great! Thanks.
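For reference, a trimmed sketch of the call pattern that triggered the serializer bug, with the error handling in place (the list ID and handler wiring here are illustrative; on SDK versions before 1.4.0 the `get_list` call is what raised the SerializationException, and upgrading is the actual fix):
```python
from ask_sdk_model.services import ServiceException

def fetch_active_items(handler_input, list_id):
    # Requires the skill to be built with CustomSkillBuilder(api_client=...)
    client = handler_input.service_client_factory.get_list_management_service()
    try:
        # 'active' filters out completed items
        return client.get_list(list_id, 'active')
    except ServiceException:
        # e.g. missing list permissions -> surface a permissions card instead
        return None
```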
dotnet/roslyn-analyzers
118215430
Title: Port FxCop rule CA1721: PropertyNamesShouldNotMatchGetMethods Question: username_0: **Title:** Property names should not match get methods

**Description:** The name of a public or protected member starts with "Get" and otherwise matches the name of a public or protected property. "Get" methods and properties should have names that clearly distinguish their function.

**Proposed analyzer:** Microsoft.ApiDesignGuidelines

**Notes:**<issue_closed>
Status: Issue closed
OpenITI/Annotation
704855771
Title: Text tagged: 0852IbnHajarCasqalani.RafcCisr.Shamela0012724-ara1.completed Question: username_0: 00#VERS#CLENGTH##: 502400 00#VERS#LENGTH###: 121950 00#VERS#URI######: 0852IbnHajarCasqalani.RafcCisr.Shamela0012724-ara1.completed 80#VERS#BASED####: http://www.worldcat.org/oclc/949486002 80#VERS#COLLATED#: http://www.worldcat.org/oclc/949486002 80#VERS#LINKS####: https://waqfeya.com/book.php?bid=3970 90#VERS#ANNOTATOR: SP Loynes 90#VERS#COMMENT##: Good correspondence, no obvious errors. Biographical tags were added manually. 90#VERS#DATE#####: 2020-09-19 90#VERS#ISSUES###: NO_MAJOR_ISSUES Answers: username_1: https://github.com/OpenITI/Annotation/issues/2170
emberjs/ember.js
308136937
Title: Document and support {{let}} helper Question: username_0: ## Tasks

- [ ] Turn [ember-let](https://github.com/thefrontside/ember-let) into a polyfill.
- [ ] Document `{{let}}` in the Guides. [Issue](https://github.com/emberjs/guides/issues/2308).
Answers: username_1: Does your first point mean to put some sort of check like [here](https://github.com/username_3/ember-getowner-polyfill/blob/master/index.js#L19) somewhere [here](https://github.com/thefrontside/ember-let/blob/master/index.js#L42)?
username_2: The built-in `{{let}}` helper is [documented in the API docs](https://emberjs.com/api/ember/3.1/classes/Ember.Templates.helpers/methods/let?anchor=let). I haven't gotten it working in an Ember 3.1.0 app, but I'm not sure why yet.
username_3: It was only added in 3.2, the API docs are missing our feature flagging `@category` pragma perhaps?
username_0: Apologies! I have opened a PR correcting the problem 😅.
username_4: Forgot to mention that the polyfill is completed and published to npm!

https://github.com/username_4/ember-let-polyfill
username_5: @username_4 does this mean we can close this issue? Or should we add a checkbox to do something additional with documentation?
username_4: @username_5 Yep, the only outstanding thing is that I need to get the polyfill listed correctly on ember-observer. I've requested a correction from them so hopefully it'll be fixed soon.
Status: Issue closed
anthonydresser/testissues
666809772
Title: A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader : Incorrect name 'In collapse' is announced for the 'Show more/less' control for screen reader users. Question: username_0: **"[Check out Accessibility Insights! ](https://nam06.safelinks.protection.outlook.com/?url=https://accessibilityinsights.io/&data=02%7c01%7cv-manai%40microsoft.com%7cb67b2c4b646d4f9561a208d6f4b5c39b%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c636965458847260936&sdata=T26HQfSGOlnuRQdX%2ByXk%2B2bxqgwFvCIVfuboZUWidYY%3D&reserved=0)- Identify accessibility bugs before check-in and make bug fixing faster and easier.”**
GitHubTags:-A11y_AzureDataStudio;-July_2020;-A11yMAS;-A11yTCS;-SQL Azure Data Studio;-Benchmark;-MAC;-Screenreader;-VoiceOver;-A11ySev2;-Benchmark;-MAS1.3.1;-MAS4.1.2;-MAS4.2.1;
---
Environment Details:
Application Name: Azure Data Studio
Application Version: 1.21.0-insider
Commit: e<PASSWORD>
Date: 2020-07-24T09:28:31.172Z
VS Code: 1.48.0
Electron: 9.1.0
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Darwin x64 19.6.0
Operating system: macOS Catalina (Version 10.15.6 (19G73))
Screen Reader: VoiceOver
MAS References: MAS1.3.1, MAS4.1.2, MAS4.2.1
---
Repro Steps:
1. Launch the Azure Data Studio Insiders application.
2. Connect to a server.
3. Double click on the connected server, or right click on it and select the manage option, to open the Dashboard.
4. Navigate to Home under Dashboard and hit enter.
5. Start the screen reader, then navigate to the "^" control and listen to the announcement made for this control.
---
Actual:
When screen reader users navigate to the "^" control, its name is announced as 'Collapsed selected', which is incorrect.
---
Expected:
The "^" control should be given the name 'Show Details', along with its expand/collapse state, for screen reader users, so that users are able to identify its state when interacting with it.
---
User Impact:
If the proper name and state of the control are not announced to screen reader users, they will not understand how to interact with that control.
---
Attachment link for Reference:
dart-lang/site-www
297305873
Title: Effective Dart Design page is not loading CSS Question: username_0: To reproduce:

1) Open https://www.dartlang.org/guides/language/effective-dart/design

Expected:

2) Loads normally.

Actual:

2) The CSS does not load.

![image](https://user-images.githubusercontent.com/6378241/36236983-1139e0f8-11ae-11e8-8372-73574ba3dc1f.png)
![image](https://user-images.githubusercontent.com/6378241/36237019-4239a378-11ae-11e8-87ac-32ae63a2017d.png)

I can see the network request to load the CSS has 404ed in the console.

Request URL: https://www.dartlang.org/assets/style-49b15a079291b882826a4cf279523e83fa67e31017c18d97bca2d7254a977ea7.css
Request Method: GET
Status Code: 404

Was able to reproduce in incognito mode and on other computers. Was not able to reproduce on any other Effective Dart pages besides "Design."
Answers: username_1: If you hover over the tool icon, what build info do you see? cc @kwalrath @kevmoo
username_0: Tool icon: "Site built on 2018/02/09 20:40 UTC"

Per #456, yes, it seems like a CDN issue. I and someone else -- both located in Southern California -- can reproduce this issue in Chrome on three different computers, in incognito and with cleared caches. I asked two other people in Northern California and Washington state, respectively, if they could reproduce, which they could not, even with cleared caches.
username_1: Are you still seeing the problem? (I've had to deploy an update, and a redeploy usually makes these sorts of problems go away.)
username_0: Looks to be working now. "Site built on 2018/02/15 06:32 PST"

![image](https://user-images.githubusercontent.com/6378241/36266281-f5c1267e-1225-11e8-81f4-0054a1e47889.png)
username_1: Thanks!
Status: Issue closed
Kamalisk/arkhamdb
744178377
Title: XSS exploit for the view Question: username_0: I've found a way to inject JavaScript into the view, by saving a script in the deck description on the edit page.
![image](https://user-images.githubusercontent.com/21199130/99306236-e91e2c80-2822-11eb-9659-63b801c5da41.png)
I made a quick script on this page that shows the logged-in user's first deck list, to demonstrate that it could become dangerous. It could also make requests to other servers, such as Google, so information could be sent to third parties.

Here's the example deck list:
https://arkhamdb.com/deck/view/1079700
Answers: username_1: Thanks for the report. I guess that has been there for a while. For now I have disabled the HTML stuff in deck views. Decklists are escaped properly; I will have to update decks to use the same, but for now the descriptions show up as plain text.
username_0: Super, thanks for the great work!
username_2: Ok, so this is why my notes turned into HTML. Yeah, nicely spotted and quickly disabled.
Status: Issue closed
username_1: Used DOMPurify to clean up the description and re-enabled it. Should suffice for now unless I change how it works.
syssi/xiaomi_cooker
797727980
Title: Please add support for the Xiaomi YLIH02CM IH 1S 3L (chunmi.cooker.k1pro1) Question: username_0: I have a Xiaomi YLIH02CM IH 1S 3L (chunmi.cooker.k1pro1). I use your component, but it does not get data from the rice cooker. Please add support for it. Thank you very much. (Sorry, my English is very bad.)
![New Bitmap Image (2)](https://user-images.githubusercontent.com/42457105/106387722-ad503780-640d-11eb-8787-f1c46821d9f0.jpg)
pantsbuild/pants
31268158
Title: "examples" or "dogfood" repo for sample code, ad-hoc experimenting Question: username_0: We have sample code to demonstrate how to set up a source tree for Pants, yay. The sample code sits uneasily in the same source tree as Pants itself. We'd like to migrate the sample code out to another repo. We want to preserve ease of: - testing local pants hacks on sample code - ci.sh ? - docs "include" sample code See the discussion: https://groups.google.com/forum/#!searchin/pants-devel/commons-ish/pants-devel/-AWR8Apifwo/gOkHDNz4eKwJ Some excerpts from that discussion: # <NAME>: ...a repo we use to test pants, and that could also be our example repo. ... we could use this pants testing repo for... - Does bundle produce the expected output? - Does jar produce a jar that looks like the intended one? - Did publish to a local directory do things correctly? ...black-box testing...compare what it actually produced against what's expected...supplement the unit tests. It could also be used in our docs as our examples repo. # <NAME>: I like the dogfood repo idea, rather than cluttering pantsbuild/pants up with java/scala source, even frivolous example source. Also, it's a more realistic example because it's a separate repo. pantsbuild/pants is special in all sorts of ways (e.g., you can run pants from source). Answers: username_1: A couple years later...we ended up creating an example repository setup that demonstrates different Pants functionality. For now, we have an example of V2 Python support; soon, we'll have an example of creating a custom plugin. https://pants.readme.io/docs/example-repos Status: Issue closed
Koheron/koheron-sdk
176700073
Title: RP works with DHCP but not with a static local connection Question: username_0: On a local connection (without DHCP), all the LEDs up to and including the 7th light up, and it is impossible to ping 192.168.1.255 (or other addresses). The PC is correctly configured as 192.168.1.101 with netmask 255.255.255.0. DHCP, on the other hand, works.
Answers: username_1: If I understand correctly, you tried to connect the Red Pitaya directly using two methods:
* with a static IP.
* with a dynamic IP provided by a DHCP server running on your PC.

Is that right?
username_0: - DHCP: the RP is connected and registered on the general network. The LEDs show the last number of the IP. It can be accessed without problems (ping, ssh, ...)
- Local: the RP is connected directly to an Ethernet port of a PC configured as 192.168.1.101. All the LEDs are lit (255) and it is impossible to connect via SSH or ping.
username_1: The LEDs display 255 by default when the board has not yet found its IP address (https://github.com/Koheron/koheron-sdk/blob/master/drivers/common/common.hpp#L83).

To connect to the Red Pitaya with a static IP, you need to modify the contents of the file `/etc/network/interfaces` on the Red Pitaya:

## Default configuration (DHCP):
```
allow-hotplug eth0

# DHCP configuration
iface eth0 inet dhcp
    post-up ntpdate -u ntp.u-psud.fr
    post-up systemctl start koheron-server-init
```

## Static IP configuration (IP address = `192.168.1.100`):
```
allow-hotplug eth0

# Static IP
iface eth0 inet static
    address 192.168.1.100
    gateway 192.168.1.0
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    post-up ntpdate -u ntp.u-psud.fr
    post-up systemctl start koheron-server-init
```
Status: Issue closed
deg0nz/MMM-PublicTransportBerlin
299971417
Title: vbb.transport.rest is deprecated Question: username_0: Hey! This is the author of [vbb-rest](https://github.com/username_0/vbb-rest), deployed at `vbb.transport.rest`. Nice to see that [you (seem to) use my service](https://github.com/username_1/MMM-PublicTransportBerlin/blob/101eb2ebcb88fad1164950b854eb8e9f6b2a8056/README.md#how-to-get-the-stationid)!

I'm here to let you know that **the API & format deployed at `vbb.transport.rest` is deprecated. Please use the new API, deployed at `2.vbb.transport.rest`.**

You can find docs for the new format in the [`vbb-rest` repo](https://github.com/username_0/vbb-rest).
Status: Issue closed
Answers: username_1: Thanks for the hint! I always appreciate your nice deprecation notes. :) So thank you for that!

Since commit a2cb835 the module uses `[email protected]`, so the new endpoint `2.vbb.transport.rest` is in use.

Closing.
username_0: I've shut it off now.
username_0: **I have set up [`v5.vbb.transport.rest`](https://v5.vbb.transport.rest/), the successor of `3.vbb.transport.rest`.** As usual, unfortunately the response format has changed slightly (to the format of [`hafas-client@5`](https://github.com/public-transport/hafas-client/tree/5); make sure to check its [migration guide](https://github.com/public-transport/hafas-client/blob/5/docs/migrating-to-5.md)).

I will keep `3.vbb.transport.rest` running for a while, and will announce its shutdown [via RSS](https://transport.rest/feed.xml) beforehand.
username_0: I have deprecated `3.vbb.transport.rest` and will shut it off in a month. Please migrate to [`v5.vbb.transport.rest`](https://v5.vbb.transport.rest/).
username_1: @username_0 Thanks for the reminder! I migrated to hafas 5.x with commit 658b48c5114f6fc2d6d50174aa55719297642c93
username_0: Just noticed that the deprecation warning is somewhat irrelevant for this project because, unlike a public API endpoint, `hafas-client` versions won't be shut off. You can of course continue using them, but they won't receive bug fixes (except explicit requests for backports) and won't have all the new features.
username_1: Oh, Ok... I didn't look into the code of the HAFAS Client and the Profiles. I thought that when I use the corresponding profile, I automatically use the new endpoints with new `hafas-client` versions.

So the updated endpoints and APIs are only in use for the specific clients (e.g. `vbb-hafas` or `bvg-hafas`)? I had already thought about re-migrating to VBB or BVG, because it seems that the VBB outage problems are more or less solved by now. I migrated to `hafas-client` to have a more reliable data source back in the day.
username_0: Do we have a misunderstanding here? There are a lot of counterintuitive versioning schemes involved here:

- `v5.vbb.transport.rest` is [`vbb-rest#5`](https://github.com/username_0/vbb-rest/tree/5), which uses `vbb-hafas@7`, which uses `hafas-client@5` with the VBB profile.
- `3.vbb.transport.rest` is [`vbb-rest#3`](https://github.com/username_0/vbb-rest/tree/3), which uses `vbb-hafas@6`, which uses `hafas-client@4` with the VBB profile.
- `v5.bvg.transport.rest` is [`bvg-rest#5`](https://github.com/username_0/bvg-rest/tree/5), which uses `bvg-hafas@3`, which uses `hafas-client@5` with the BVG profile.
- `2.bvg.transport.rest` is [`bvg-rest#2`](https://github.com/username_0/bvg-rest/tree/2), which uses `bvg-hafas@2`, which uses `hafas-client@4` with the DB profile.
The gist of it is that the `v5.*.transport.rest` APIs use `hafas-client@5` underneath, whereas the previous `transport.rest` APIs (namely `3.vbb` & `2.bvg`) are deprecated now and use `hafas-client@4` underneath. **Since you use `hafas-client` directly *anyway*, technically, you didn't have to upgrade to `hafas-client@5`**, because `hafas-client@4` will keep working until HaCon breaks the HAFAS endpoints. Now that you have upgraded, though, you benefit from bugfixes and additional features in `hafas-client`.
username_0: You don't need to do anything here, except to check that your code works with the `hafas-client@5` response structures.
username_1: Alright. Okay, that is good to know. It was a little bit confusing, tbh... Thanks for clarifying!
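To make the migration concrete, a minimal `hafas-client@5` call with the VBB profile looks roughly like the sketch below. This is illustrative only — the station ID (Berlin Hauptbahnhof in the VBB scheme) and the options are assumptions, not code taken from MMM-PublicTransportBerlin:

```ts
// departures.ts — minimal hafas-client@5 usage with the VBB profile
// (sketch, not the module's actual code; assumes esModuleInterop).
// Requires: npm install hafas-client@5
import createClient from "hafas-client";
import vbbProfile from "hafas-client/p/vbb";

// hafas-client@5 expects a user-agent string identifying your app
// as the second argument.
const client = createClient(vbbProfile, "my-mirror-module-example");

async function main() {
  // In hafas-client@5, departures() resolves to an array of departure
  // objects; the shape differs slightly from hafas-client@4, hence the
  // migration guide linked above.
  const deps = await client.departures("900000003201", { duration: 30 });
  for (const dep of deps) {
    console.log(dep.when, dep.line?.name, "→", dep.direction);
  }
}

main().catch(console.error);
```

The same format difference shows up over HTTP between `3.vbb.transport.rest` and `v5.vbb.transport.rest`; using `hafas-client` directly simply moves the version choice into `package.json`.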
quay/claircore
518560838
Title: alpine 3.4 parser
Question:
username_0: libvuln reports a parsing error for alpine 3.4
```
ERR error from updater: updater alpine-main-v3.4-updater failed to update: failed to parse the fetched vulnerability database: yaml: unmarshal errors:
line 37: mapping key "2.4.27-r1" already defined at line 27 component=libvuln
```
A fix to the secdb has been opened: https://github.com/alpinelinux/alpine-secdb/pull/5
Waiting for comments or a merge.
Answers:
username_1: Thanks -- as alpine-secdb is autogenerated from the APKBUILD files in `aports`, it would be better to just regenerate. (And, having checked locally, doing so appears to remove this duplication.) I'll try and get that done.
username_0: Issue is resolved: `3:57PM INF successfully updated the vulnstore component=update-controller interval=30m0s name=alpine-main-v3.4-updater`
Thanks @username_1
Status: Issue closed
username_0: @username_1 Hey there, it looks like we are experiencing this issue again with the alpine 3.7 security database.
```
alpine-main-v3.7-updater: failed to parse the fetched vulnerability database: yaml: unmarshal errors:
line 707: mapping key "1.8.3-r1" already defined at line 705
```
Would you be able to assist once again?
username_0: @username_1 This issue is resolved by using the JSON definitions instead of YAML.
Status: Issue closed
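The failure mode itself is easy to reproduce in miniature: a strict YAML parser rejects a mapping that defines the same key twice, which is exactly what the autogenerated secdb contained. Below is a sketch using js-yaml; claircore actually parses the database in Go, but the duplicate-key strictness is analogous, and the version key and CVE values here are only illustrative:

```ts
// dup-key.ts — demonstrate the "mapping key already defined" class of
// error. Sketch with js-yaml (npm install js-yaml); claircore's parser
// is a Go YAML library, but both reject duplicate mapping keys.
import yaml from "js-yaml";

const doc = `
secfixes:
  "2.4.27-r1":
    - CVE-2017-9788
  "2.4.27-r1":
    - CVE-2017-9789
`; // the same key appears twice -> parse error

try {
  yaml.load(doc);
  console.log("parsed OK (unexpected)");
} catch (err) {
  // js-yaml throws a YAMLException mentioning "duplicated mapping key"
  console.error("parse failed:", (err as Error).message);
}
```

This also explains why switching to the JSON definitions worked around the problem: typical JSON decoders (Go's `encoding/json` included) silently keep the last occurrence of a duplicated key instead of erroring.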
mattermost/mattermost-mobile
602535870
Title: All channels are showing as read only since v1.30.0 Question: username_0: #### Summary
Since version 1.30.0 was installed on my phone, all Mattermost channels are shown as read-only ("This channel is read-only"). I have confirmed with others that this is not unique to my phone.

#### Environment Information
- Device Name: Xiaomi Mi 9 Lite
- OS Version: 10 QKQ1.190828.002
- Mattermost App Version: 1.30.0
- Mattermost Server Version: 4.4.0
Answers:
username_1: @username_0 Would you be open to upgrading your server to a more recent version?
username_0: @username_1 Thanks for the quick response. Unfortunately I don't own the server-side installation, so that's not possible at this moment. I'll forward the suggestion to the administrator. I've downgraded to v1.29.0, which resolved it for now.
It might still be worth checking which particular changes would've broken this for older server versions? My hunch is it's related to pull request [#3904](https://github.com/mattermost/mattermost-mobile/commit/fbd7fedfbc5078138f453ae3ed20965ae9a73980)
username_2: In our company we have Mattermost Server 4.2.0 and Mattermost Mobile 1.30.0 (similar to what @username_0 mentioned), and my team has the same problem. The only solution that works is downgrading the app to 1.29.0.
What server version do you think will work properly with the latest version of the Mattermost mobile app?
username_1: We support versions v5.19+:
- https://github.com/mattermost/mattermost-mobile#mattermost-mobile
- https://github.com/mattermost/mattermost-mobile/blob/master/CHANGELOG.md#1300-release
username_2: Oh, reading this, I see that our Mattermost Server is very outdated, hahaha. Sorry for the inconvenience, and thanks a lot for your help.
username_1: The Mobile v1.30.1 dot release has been published and will be available in the app stores soon.
Status: Issue closed