repo_name
stringlengths
4
136
issue_id
stringlengths
5
10
text
stringlengths
37
4.84M
xitongsys/parquet-go
976812642
Title: No option to `omitempty` when not using pointers Question: username_0: I cannot convert the struct that I have to use pointers as it has over 1000 fields now and is very tightly integrated into the codebase. The following json example shows what I am trying to achieve: ```golang package main import ( "encoding/json" "io/ioutil" ) type Salary struct { Basic, Tax, Pension float64 `json:",omitempty"` } type Employee struct { FirstName, LastName, Email string `json:",omitempty"` Age int MonthlySalary []Salary `json:",omitempty"` } func main() { data := Employee{ Email: "<EMAIL>", MonthlySalary: []Salary{ { Basic: 15000.00 } } } file, _ := json.MarshalIndent(data, "", " ") _ = ioutil.WriteFile("test.json", file, 0o644) } ``` Now the output json looks like: ```json { "Email": "<EMAIL>", "Age": 0, "MonthlySalary": [ { "Basic": 15000 } ] } ``` As you can see, the items in the struct that have the omitempty tag and that are not assigned do not appear in the json, i.e. `Tax` and `Pension`. But on the other hand `Age` does not have this tag and hence it is still included in the json. This is problematic as all fields in the struct are assigned memory when this golang library writes to parquet - so if you have a big struct that is only sparsely populated it will still take the full amount of memory, i.e. as if all fields contain data. It is a bigger problem when the file is read again, as there is no way of knowing if the value that was put in the parquet file was the empty value or if it was just not assigned, i.e. for an int, was it zero or was it not set? I am happy to help implement an omitempty tag for this library if I can convince you of the value of having it. Answers: username_1: There was (is?) a long discussion in the go community about `omitempty`, zero value, and JSON Marshal/Unmarshal. I'd say if you can make it in go with JSON (which I believe you cannot), then it can be adopted in Parquet. username_2: For parquet, you should use an OPTIONAL field for omitempty, which is a pointer in a Go struct. There is no simple way to use a dynamic field struct in Go. Status: Issue closed username_0: I think I explained why I cannot use pointers - was it not clear?
keystonejs/keystone
431620868
Title: externalLink in `nav` doesn't work without being an array and renders a create button and count Question: username_0: <!-- We're trying to keep the issue tracker unpolluted. Please ask questions and support requests on: * https://stackoverflow.com/questions/tagged/keystonejs Join the KeystoneJS Slack for discussion with the community & contributors: * https://launchpass.com/keystonejs --> ### Expected behavior <!-- If you're describing a bug, tell us what should happen --> <!-- If you're suggesting a change/improvement, tell us how it should work --> According to docs https://github.com/keystonejs/keystone/blob/master/docs/documentation/Configuration/AdminUI-Options.md#navigation I can add an external link. I would expect to see a tile that goes to the link, but does not contain list things like items count and an add button. ### Actual/Current behavior <!-- If you're describing a bug, tell us what happens instead of the expected behavior --> <!-- If you're suggesting a change/improvement, explain the difference from current behavior --> You can only add an external link by putting the object into an array. Because this code: https://github.com/keystonejs/keystone/blob/master/lib/core/initNav.js#L32 will not wrap the Object with an array, and https://github.com/keystonejs/keystone/blob/master/lib/core/initNav.js#L40 _.map will actually map over the Object itself, thus crashing on line 42. When rendering the tile, https://github.com/keystonejs/keystone/blob/master/admin/client/App/screens/Home/components/Lists.js#L18 this code will see no `listData` (because there is no list for it) and fallback `isNoCreate` to `false`, thus rendering a create button. The count is always rendered, which doesn't really make sense with a externalLink. ### Steps to reproduce the actual/current behavior <!-- If you're describing a bug, tell us what steps to take to reproduce your bug --> <!-- If you're suggesting a change/improvement, explain how to reproduce the current behavior --> The example will break: ``` keystone.set('nav', { 'posts': ['posts', 'post-categories'], 'galleries': 'galleries', 'enquiries': 'enquiries', 'users': 'users', 'externalLink': { label: 'Keystone', key: 'keystone', path: 'http://keystonejs.com/' } }); ``` It builds with the array fix and will show count and add button: ``` keystone.set('nav', { 'posts': ['posts', 'post-categories'], 'galleries': 'galleries', 'enquiries': 'enquiries', 'users': 'users', 'externalLink': [{ label: 'Keystone', key: 'keystone', path: 'http://keystonejs.com/' }] }); ``` ### Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version | ---------------- | ------- | Keystone | 4.0.0 | Node.js | 10.15.1 | Browser | latest Chrome Answers: username_1: Hi @username_0. Did you working on it or I can check and create PR? username_0: @username_1 i actually did work on it a while ago, now you mention it. I got some stuff fixed for this, but I wasn't too happy with the look. A block without a count and button will be way smaller... I'll set up a PR today, you can see
NineWorlds/serenity-android
542386839
Title: Allow switching of Users for Emby Question: username_0: Need to allow the ability to switch to a different user with Emby. 1. Include user switching icon in the tool bar at the top of the screen. 2. Clients need to return whether they support multiple users. 3. If client supports it then show the toolbar option, otherwise hide it. 4. When clicking on the option, the User Login Screen should be shown.<issue_closed> Status: Issue closed
angular/angular
891800351
Title: error on ng serve angular v12 Question: username_0: # Bug Report ### Affected Package @angular/cli ### Is this a regression? No ### Description I upgraded from angular 11 to angular 12. When I run the below common node --max_old_space_size=8192 ./node_modules/@angular/cli/bin/ng serve the is error show below without any clue... ## Exception or Error <pre><code> Warning: C:/Users/kokke/Documents/HLY/HLYUX/src/app/services/speech-synthesizer.service.ts is part of the TypeScript compilation but it's unused. Add only entry points to the 'files' or 'include' properties in your tsconfig. Warning: C:/Users/kokke/Documents/HLY/HLYUX/src/app/services/style-manager.ts is part of the TypeScript compilation but it's unused. Add only entry points to the 'files' or 'include' properties in your tsconfig. Warning: C:/Users/kokke/Documents/HLY/HLYUX/src/environments/environment.hmr.ts is part of the TypeScript compilation but it's unused. Add only entry points to the 'files' or 'include' properties in your tsconfig. Warning: C:/Users/kokke/Documents/HLY/HLYUX/src/environments/environment.prod.ts is part of the TypeScript compilation but it's unused. Add only entry points to the 'files' or 'include' properties in your tsconfig. Error: undefined:5:112729: property missing ':' </code></pre> ## Your Environment **Angular Version:** <pre><code> Angular CLI: 12.0.0 Node: 14.16.1 Package Manager: npm 6.14.12 OS: win32 x64 Angular: ... Package Version ------------------------------------------------------ @angular-devkit/architect 0.1200.0 (cli-only) @angular-devkit/core 12.0.0 (cli-only) @angular-devkit/schematics 12.0.0 (cli-only) @schematics/angular 12.0.0 (cli-only) [Truncated] "@types/googlemaps": "3.43.3", "@types/jasmine": "3.7.4", "@types/jasminewd2": "2.0.9", "@types/jwt-decode": "2.2.1", "@types/lodash": "4.14.169", "@types/node": "15.0.3", "codelyzer": "6.0.2", "jasmine-core": "3.7.1", "jasmine-spec-reporter": "7.0.0", "karma": "6.3.2", "karma-chrome-launcher": "3.1.0", "karma-coverage-istanbul-reporter": "3.0.3", "karma-jasmine": "4.0.1", "karma-jasmine-html-reporter": "1.6.0", "protractor": "7.0.0", "ts-node": "9.1.1", "tslib": "2.2.0", "typescript": "4.2.4", "webpack-bundle-analyzer": "4.4.1" }
reportportal/agent-net-nunit
834865498
Title: Tests are not finishing with NUnit 3.13.1 and dotnet 5 Question: username_0: Tests are not finishing with NUnit 3.13.1 and dotnet 5. Report portal logs contain the following errors: ``` ReportPortalListener Error: 0 : 10:09:47.7933722 : 1-testhost : ReportPortal exception was thrown. System.ArgumentException: Cannot parse empty value. at ReportPortal.Shared.Execution.Metadata.MetaAttribute.Parse(String value) at ReportPortal.NUnitExtension.ReportPortalListener.FinishTest(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Test.cs:line 88 ReportPortalListener Error: 0 : 10:09:47.8002702 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. at ReportPortal.Shared.Reporter.TestReporter.Finish(FinishTestItemRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.<FinishSuite>b__45_0(String __id, FinishTestItemRequest __finishSuiteRequest, String __report, String __parentstacktrace) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 191 at ReportPortal.NUnitExtension.ReportPortalListener.FinishSuite(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 222 ReportPortalListener Error: 0 : 10:09:47.8026000 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. at ReportPortal.Shared.Reporter.TestReporter.Finish(FinishTestItemRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.<FinishSuite>b__45_0(String __id, FinishTestItemRequest __finishSuiteRequest, String __report, String __parentstacktrace) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 191 at ReportPortal.NUnitExtension.ReportPortalListener.FinishSuite(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 222 ReportPortalListener Error: 0 : 10:09:47.8037242 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. at ReportPortal.Shared.Reporter.TestReporter.Finish(FinishTestItemRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.<FinishSuite>b__45_0(String __id, FinishTestItemRequest __finishSuiteRequest, String __report, String __parentstacktrace) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 191 at ReportPortal.NUnitExtension.ReportPortalListener.FinishSuite(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 222 ReportPortalListener Error: 0 : 10:09:47.8048896 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. 
at ReportPortal.Shared.Reporter.TestReporter.Finish(FinishTestItemRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.<FinishSuite>b__45_0(String __id, FinishTestItemRequest __finishSuiteRequest, String __report, String __parentstacktrace) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 191 at ReportPortal.NUnitExtension.ReportPortalListener.FinishSuite(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 222 ReportPortalListener Error: 0 : 10:09:47.8060875 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. at ReportPortal.Shared.Reporter.TestReporter.Finish(FinishTestItemRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.<FinishSuite>b__45_0(String __id, FinishTestItemRequest __finishSuiteRequest, String __report, String __parentstacktrace) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 191 at ReportPortal.NUnitExtension.ReportPortalListener.FinishSuite(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Suite.cs:line 222 ReportPortalListener Error: 0 : 10:09:47.8107531 : 1-testhost : ReportPortal exception was thrown. System.InsufficientExecutionStackException: Some of child test item(s) are not scheduled to finish yet. at ReportPortal.Shared.Reporter.LaunchReporter.Finish(FinishLaunchRequest request) at ReportPortal.NUnitExtension.ReportPortalListener.FinishRun(String report) in C:\projects\agent-net-nunit\src\ReportPortal.NUnitExtension\ReportPortalListener.Launch.cs:line 110 ``` Answers: username_1: @username_0 do you think the issue is related to version of nunit or .net framework? I cannot reproduce it on my side using the same set of versions. This issue looks like to be simple resolvable, but please share simple test project to reproduce it. username_0: @username_1 i'm pretty sure it is related to dotnet 5. We have other projects that are still on dotnetcore3.1 using the latest nunit and they work fine. Also, I downgraded to my projects to dotnetcore3.1 and everything started working again. username_1: @username_0 based on agent code I can imagine that nunit engine provides empty test category. I was able to reproduce this issue in following way ```csharp [Caterory("")] public void Test() {} ``` Looks like you create empty category for test. Please confirm whether this is your case. Definitely this issue should be fixed on agent side, I would like to understand if the fix will help to resolve your issue. Please get deep diver what exactly causes the error. username_0: I don't have any empty categories but i do have a custom attribute that extends category attribute like this: ``` [AttributeUsage(AttributeTargets.All, AllowMultiple = false)] public class OldNameAttribute : CategoryAttribute { public OldNameAttribute(string oldTestName) : base(oldTestName) { } } ``` also have some tests that are using multiple `Category` atributes. username_1: Yeah, this is the same. Please try to find the place where you put empty `oldTestName` string value. Hmm, but you mentioned it's related to .net 5. NUnit engine produces empty category somehow, please highlight how. If it's possible to share nunit's result xml file, it will help to identify problematic test. 
Or, you can set `ReportPortal_TraceLevel` environment variable to `Verbose` (don't forget to restart VS or any other child processes to take an effect) and provide log files. username_0: For some reason I can't get this to reproduce at the moment. I will close for now and reopen when/if I can. I'll then post the verbose logging you recommended. Status: Issue closed username_1: @username_0 I am targeting the fix to v4.1.0 Please let me know if this issue is still reproducible and noisy for you. Status: Issue closed
networkx/networkx
367007562
Title: Subgraph failure over unicode node names on python 2. Question: username_0: Minimal failure example on python 2.7: ``` import networkx as nx G = nx.Graph() a = u"甲" b = u"乙" c = u"丙" G.add_nodes_from([a,b,c]) G.add_edge(a,b) S = G.subgraph([a,b]) S.number_of_edges() ``` Got: <details> <summary>Traceback</summary> ``` --------------------------------------------------------------------------- UnicodeEncodeError Traceback (most recent call last) <ipython-input-1-3d3a0780213e> in <module>() 10 G.add_edge(a,b) 11 S = G.subgraph([a,b]) ---> 12 S.number_of_edges() /home/hcwu/.local/lib/python2.7/site-packages/networkx/classes/graph.pyc in number_of_edges(self, u, v) 1706 """ 1707 if u is None: -> 1708 return int(self.size()) 1709 if v in self._adj[u]: 1710 return 1 /home/hcwu/.local/lib/python2.7/site-packages/networkx/classes/graph.pyc in size(self, weight) 1652 6.0 1653 """ -> 1654 s = sum(d for v, d in self.degree(weight=weight)) 1655 # If `weight` is None, the sum of the degrees is guaranteed to be 1656 # even, so we can perform integer division and hence return an /home/hcwu/.local/lib/python2.7/site-packages/networkx/classes/graph.pyc in <genexpr>(***failed resolving arguments***) 1652 6.0 1653 """ -> 1654 s = sum(d for v, d in self.degree(weight=weight)) 1655 # If `weight` is None, the sum of the degrees is guaranteed to be 1656 # even, so we can perform integer division and hence return an /home/hcwu/.local/lib/python2.7/site-packages/networkx/classes/reportviews.pyc in __iter__(self) 445 for n in self._nodes: 446 nbrs = self._succ[n] --> 447 yield (n, len(nbrs) + (n in nbrs)) 448 else: 449 for n in self._nodes: /usr/lib64/python2.7/_abcoll.pyc in __contains__(self, key) 386 def __contains__(self, key): 387 try: --> 388 self[key] 389 except KeyError: 390 return False /home/hcwu/.local/lib/python2.7/site-packages/networkx/classes/coreviews.pyc in __getitem__(self, key) 297 if key in self._atlas and self.NODE_OK(key): 298 return self._atlas[key] --> 299 raise KeyError("Key {} not found".format(key)) 300 301 def copy(self): UnicodeEncodeError: 'ascii' codec can't encode character u'\u4e59' in position 0: ordinal not in range(128) ``` </details> Answers: username_1: NetworkX 2.2 (recently released) is the last version of NetworkX to support Python 2. We are not currently planning any additional releases for Python 2. Status: Issue closed
storybookjs/storybook
939016632
Title: The "Copy Canvas Link" leaves out the base url and only gives the iframe information making it useless Question: username_0: **Describe the bug** It appears the "Copy Canvas Link" on the top right side of the Canvas tab adds to the clipboard only part of the url, it only gives you the iframe part of the url but misses the main base url. For example if I had my story on this url: `http://localhost:6006/?path=/story/example-button--large` And then I clicked on the "Copy Canvas Link" on the top right (The clipboard on the furthest right): ![image](https://user-images.githubusercontent.com/26545361/124787630-9fe82d80-df16-11eb-9fd5-f7d5ad224ddc.png) Then when you go over to a new tab and try to copy and paste what you just "Copied" you will get: `iframe.html?id=example-button--large&args=` When it should be instead `http://localhost:6006/iframe.html?id=example-button--large&args=&viewMode=story` You are essentially missing the base part of the url. I'm not sure if this is intentional or not. Is this a bug or is this something on my end? If so is there a working solution or a wide to "hide" the "Copy Canvas Link" option on the top right? Can you easily see this by doing `create-react-app new_project` and then `npx sb init`. You can reproduce this issue. Thanks! Answers: username_1: Hello I'm new and would like to take this up if it's available. This will get me started to explore more! username_2: @username_1 Thank you for helping make Storybook better! 🙏 Please check out [how to contribute](https://storybook.js.org/docs/react/contribute/how-to-contribute) in our docs and feel free to ask questions in `#contributing` in [our Discord](https://discord.gg/storybook). username_1: Sounds good! Thank you. In the meantime, can this be assigned to me? username_1: The PR is up and ready! 😄 username_3: Is this still outstanding? I am interested in looking into this fix if we still have not solved it yet. username_4: I haven't seen any activity for a while, so I made a pull request. I am new to contributing, so please let me know if there is anything I missed.
yiisoft/yii2
197400900
Title: Question: how to get module from components? Question: username_0: Dear yii elders and geeks :) What are good ways to get the module from inside the module's components? I mean this one obviously isn't: ```php class MyComponent { public function doIt() { $module = Yii::$app->getModule('my-module'); } } ``` Because spreading `'my-module'` all around the code is not good. Using a constant like `MyModule::NAME` is not quite enough because it is not flexible. E.g. for module controllers it's not a problem because of the `$module` property populated during request processing. Answers: username_1: Could it be done via the DI container's getSingleton()? username_2: The module instance can be obtained using its class name: ``` MyModule::getInstance(); ``` If the controller is already created, it can be obtained via its instance: ```php $currentModule = Yii::$app->controller->module; ``` Explicit binding for the module inside a component will not be supported as it introduces a cyclic reference. Status: Issue closed username_3: @username_0, regarding your question and your criteria, the cleanest way is to construct the Component during the init() call of the Module and to inject the Module in the Component's $module property. This prevents you from having to use the Module alias `my-module` or to use the global Yii scope.
facebook/react-native
520630466
Title: Question: simulator does not install app? Help me! It's too bad Question: username_0: Why?? It's too bad. I just run ``` react-native init App yarn react-native run-ios ``` I can access port 8081 normally in the browser, but in the simulator the app is never installed. Why exactly?
Azure/azure-sdk-for-net
1129800687
Title: Missing point about Sessions? Question: username_0: This page does not show an example of how to code a trigger with sessions enabled. In version 4.x: public async Task Run([ServiceBusTrigger("%QueueName%", Connection = "QueueConnectionString", IsSessionsEnabled = true)] Message message, IMessageSession messageSession, ILogger logger) How do I migrate it to the new 5.2 with Azure.Messaging.ServiceBus 7.5.x? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dab2626a-9579-fe2b-f11b-0ffec8666fcf * Version Independent ID: 9f48d2bc-eda0-57e8-1ecd-a2489f1cb862 * Content: [Azure WebJobs Service Bus client library for .NET - Azure for .NET Developers](https://docs.microsoft.com/en-us/dotnet/api/overview/azure/microsoft.azure.webjobs.extensions.servicebus-readme-pre) * Content Source: [api/overview/azure/microsoft.azure.webjobs.extensions.servicebus-readme-pre.md](https://github.com/Azure/azure-docs-sdk-dotnet/blob/master/api/overview/azure/microsoft.azure.webjobs.extensions.servicebus-readme-pre.md) * Service: **webjobs** * Product: **azure** * Technology: **azure** * GitHub Login: @ramya-rao-a * Microsoft Alias: **ramyar** Answers: username_1: Thank you for your feedback. Tagging and routing to the team member best able to assist. username_2: Related issue - https://github.com/Azure/azure-sdk-for-net/issues/24642 Status: Issue closed
TYPO3-Documentation/TYPO3CMS-Tutorial-SitePackage
322613140
Title: Describe/mention language file Question: username_0: If there are any cases where labels/texts can be moved into a language file under `/Resources/Private/Language/`, this should be clearly pointed out and used/suggested/recommended. Section "Fluid Templates" seems to be an appropriate spot, because it deals with sub-folders of `Resources/`, but a different section can also be considered.<issue_closed> Status: Issue closed
schmittjoh/serializer
393466916
Title: Compile error Declaration of [...] must be compatible with [...] Question: username_0: "php": ">=5.5.9", "adelplace/onesignal-bundle": "^1.1", "doctrine/doctrine-bundle": "^1.6", "doctrine/orm": "^2.5", "friendsofsymfony/rest-bundle": "dev-master", "friendsofsymfony/user-bundle": "^2.0", "incenteev/composer-parameter-handler": "^2.0", "jms/serializer-bundle": "2.0.1", "mashape/unirest-php": "^3.0", "nc/elephantio-bundle": "^2.1", "nelmio/api-doc-bundle": "3.3", "paragonie/random_compat": "2.0.9", "php-http/guzzle6-adapter": "^1.1", "phpunit/phpunit": "^7.4", "redjanym/fcm-bundle": "^1.1", "sensio/distribution-bundle": "^5.0.19", "sensio/framework-extra-bundle": "^5.0.0", "symfony/monolog-bundle": "^3.1.0", "symfony/polyfill-apcu": "^1.0", "symfony/swiftmailer-bundle": "^2.6.4", "symfony/symfony": "3.4.*", "symfony/webpack-encore-bundle": "^1.0", "twig/twig": "^1.0||^2.0" }, "require-dev" : { "sensio/generator-bundle" : "^3.0", "symfony/phpunit-bridge" : "^3.0" }, My API works on my local machine but I get this issue on my test server. (Both with php 7.2.x) Any idea? Answers: username_1: The production PHP version and dev version should be the same. If they are not, your composer file might contain libraries that cannot run on your production env (as in this case, jms/serializer v2.0 requires at least php 7.2, that's why it works locally and not on production). username_2: Dear <NAME>, how did you update ... the same thing happens to me ... in php -v I have 7.2 and in phpinfo() I have 7.1 username_0: @username_2 You might have multiple php instances. [Look at this link](https://superuser.com/a/971895) username_3: https://support.root360.cloud/support/tickets/192569
TitanNanoDE/ApplicationFrame
321747960
Title: [IndexedDB] Increase test coverage Question: username_0: Come up with more tests for the IndexedDB driver to get the test coverage to 100%. Answers: username_1: Would like to write some tests for the IndexedDB Driver, would probably need some help getting started. And some resources on IDB I should read up on. username_0: @username_1 IndexedDB is currently only covered by some tests for the ServiceWorker cache and doesn't have any tests of its own. There is an indexedDB shim for tests though and you should be able to use it. You can consult the [wiki](https://github.com/username_0DE/ApplicationFrame/wiki/Module:-IndexedDB::index) and [MDN](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API) on how the driver works and how the indexedDB API itself works.
Automattic/mongoose
105027508
Title: findById promise has different behaviour if there is a callback function Question: username_0: If I invoke findById like this, I get an object in the resolution of the promise ``` User.findById( '55eae90a337edf5319b25861' ).then(function(user){ console.log(user); // user is an object } ); ``` But if I invoke findById with a callback function, I get an array in the resolution of the promise ``` User.findById( '55eae90a337edf5319b25861', function(){} ).then(function(user){ console.log(user); // user is an array } ); ``` The promise should always resolve an object. Status: Issue closed Answers: username_1: This is indicative of a minor issue which was fixed in 55d2384. The bigger problem, though, is that you absolutely should not specify both a callback and call `.then()` on a query. This will execute the query twice by design.
denizyuret/Knet.jl
163071404
Title: Build arbitrary graph in Knet? Question: username_0: In Mocha.jl, I can build arbitrary graph using ```Concat Layer``` and ```Split Layer``` like this [issue](https://github.com/pluskid/Mocha.jl/issues/204). How can I implement this in Knet? Answers: username_1: Knet8 supports models implemented in regular Julia, including array indexing and concatenation commands which should take care of this issue. Please reopen if you have further problems. Status: Issue closed
atom-community/markdown-preview-plus
321092856
Title: blockMathSeparators option seems to be ignored Question: username_0: I've stumbled across the following trying to configure markdown-preview-plus for my needs: If I have the following in my `.atom/config.cson` (this is just for testing): ``` "markdown-preview-plus": blockMathSeparators: [ "\\[" "\\]" "||" "||" ] ``` then I would expect ``` $$x^2$$ ``` to not be rendered in the live preview, but ``` || x^2 || ``` to be. In fact the opposite happens, suggesting that the `blockMathSeparators` option is not read at all, and that a hardcoded value is used in all cases. The reason I want to modify `blockMathSeparators` is that wrapping all my display math in `$$` or `\[` is not a workable solution for writing a longer form document, as pandoc's export to latex then does not compile. Ideally naked environments like `\begin{equation}` -- `\end{equation}` I'm using the current release, markdown-preview-plus v.3.0.1. Answers: username_1: blockMathSeparators only works with markdown-it parser/renderer. Pandoc does not allow configuring math delimiters (also `\[` doesn't work in Pandoc, only `$` for inline and `$$` for display math are allowed) username_0: Ah, I see. Pandoc does however recognize naked math environments – is this something you plan to support at some point ? username_1: Pandoc doesn't actually "recognize" naked math environments. What it does is passes those through to LaTeX. For any other output formats, those will be mercilessly discarded. If you want to label/reference your equations in Pandoc, you might be interested in https://github.com/username_1/pandoc-crossref username_0: Right, I had missed that. Thanks for the tip ! Status: Issue closed
sfackler/rust-postgres
651064317
Title: Postgres enums within schema Question: username_0: The derived `FromSql` instance appears to fail when translating a Rust type to a Postgres type qualified by a schema name. For example, this works fine: ```rust #[derive(Clone, Copy, Debug, PartialEq, Eq, Deserialize, Serialize, FromSql, ToSql)] #[postgres(name = "service")] pub enum Service { Facebook, Google, Twitter, } ``` But this does not, the only difference being the "users." schema prefix: ```rust #[derive(Clone, Copy, Debug, PartialEq, Eq, Deserialize, Serialize, FromSql, ToSql)] #[postgres(name = "users.service")] pub enum Service { Facebook, Google, Twitter, } ``` The Postgres types in each case would correspond, i.e.: ```pgsql CREATE TYPE [users.]service AS ENUM ( 'Facebook', 'Google', 'Twitter' ); ``` Perhaps though there is a way to use the qualified Postgres type that I've overlooked? Answers: username_1: The derive currently just ignores the schema of the type entirely I believe. Does the first code sample (with no schema qualification) not work with the type defined in a non-default schema? username_0: That works, yes, thanks! The behavior is perhaps confusing if the schema is accepted in some cases but produces errors in others. For example, `pg_mapper` accepts the schema. If it's too much to have consistency across the related packages it may be helpful to add a note to the docs for `#[postgres]`. username_1: Yeah I think it is a bit weird that the logic currently ignores the schema entirely. I think it'd definitely make sense to handle the case where the type name is schema-qualified and add a check for the schema name at that point. Would you be interested in making a PR? username_0: Yes but I am new to Rust and databases and quite time constrained to if this is waiting on me it may be a while. username_1: No worries!
watertap-org/watertap
1149556813
Title: Renaming "solute" in zero-order models and omitting water recovery from input data Question: username_0: Not all components will be solutes, so the naming of `removal_frac_mass_solute` can be changed to `removal_frac_mass_comp`. From this, `recovery_frac_mass_H2O` would no longer be needed in YAML files to load into unit models. Instead it could be calculate as 1 - `removal_frac_mass_comp['H2O']`. Changing this would alter our WT3 migration work and could slow down our effort for WT3 and AMO model implementation, so it might be best to consider addressing this issue once models are in place. See the conversation in PR #340 for more details on the discussion. Answers: username_1: I think there are actually two part to this: 1. Should water be treated the same way as all other species - i.e. have a removal instead of recovery? At the moment water needs its own material balances as they are formatted (slightly) differently, which in turn means that the remaining balances need to be indexed over everything that isn't water which we currently call "solutes" 2. If we stay with treating water differently, is there a better name than "solutes" - we already have entrained/suspended solids in some cases which are not strictly solutes, and we may see other liquids or entrained gases in future. Either way, there is a bit of work in this, as any renaming would affect all the YAML files, as well as the core code for the unit models and all the tests. username_2: It sounds like we something we should do *after* the unit models are in place, given how many open PRs we have at the moment.
packer-community/winrmcp
53976807
Title: Cannot read WinRM info on Windows 2008 R2 Question: username_0: As noted [here](https://github.com/packer-community/packer-windows-plugins/issues/3) and [here](https://github.com/packer-community/packer-windows-plugins/issues/14), an administrator cannot access WinRM info on Windows 2008 R2. ⇒ bin/winrmcp -addr=localhost:5536 -info <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="5" Machine="win-2008-r2"><f:Message>Access is denied. </f:Message></f:WSManFault> Error number: -2147024891 0x80070005 Access is denied. Client Addr: localhost:5536 Auth: Basic User: packer WinRM Config MaxEnvelopeSizeKB: 0 MaxTimeoutMS: 0 Service/MaxConcurrentOperations: 0 Service/MaxConcurrentOperationsPerUser: 0 Service/MaxConnections: 0 Winrs/MaxConcurrentUsers: 0 Winrs/MaxProcessesPerShell: 0 Winrs/MaxMemoryPerShellMB: 0 Winrs/MaxShellsPerUser: 0 Answers: username_0: The info query is also used prior to copying a file in order to determine how many chunks it can send to the remote before needing to discard the current shell and initiate a new one. username_1: confirmed independently using winrm commandline tool ``` winrm -hostname 172.16.111.135 -username "vagrant" -password "<PASSWORD>" "winrm get winrm/config -format:xml" <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="5" Machine="VAGRANT-51A0FAR"><f:Message>Access is denied. </f:Message></f:WSManFault> Error number: -2147024891 0x80070005 Access is denied. ``` username_1: How about allowing for the value for MaxEnvelopeSizeKB as a config parameter? username_0: I'm not ready to take in arguments like that. The machine itself is the source of truth, and if we can't read it out the I'd like it degrade gracefully. Commit c29edbd044eea8a33a103bca707f7fe7528bc0fd is a small step in that direction, and will work around the *winrmcp copy* problem for a standard 2008 R2 installation. username_1: I played around with wsman, and TrustedHosts settings to get powershell to return the values, but none worked. username_0: I have it working in branch [winrmcp/elevated_info_query](https://github.com/packer-community/winrmcp/tree/elevated_info_query) but it's hardcoded to vagrant/vagrant and it's not ready for prime-time without tests and a refactor or three. :see_no_evil: username_0: I've worked around this by 1. allowing `max-ops-per-shell` to be provided to the cli/lib and 2. setting the default to 15 to make Windows 2008 R2 happy. Status: Issue closed
JuliaGPU/GPUCompiler.jl
937656587
Title: parameters from ExprTools and LLVM Question: username_0: Both ExprTools.jl and LLVM.jl export parameters, so I got the following warning: ``` WARNING: both ExprTools and LLVM export "parameters"; uses of it in module GPUCompiler must be qualified ``` Answers: username_1: dup, and already fixed. Please update your packages. Status: Issue closed
marmelab/react-admin
1004379186
Title: Missing styles for RichTextField content Question: username_0: **What you were expecting:** When using the `align right`, `align center`, `justify` and the indent buttons in the `<RichTextInput />` component, the content in the resulting `RichTextField` would have these styles applied. **What happened instead:** When using these buttons, the resulting content is still aligned to the left. Note that the content does have the proper css classes of `ql-align-right`, `ql-align-center`, `ql-align-justify` and `ql-indent-<X>` where `<X>` is the number of indents applied to the content. **Steps to reproduce:** - Add some paragraphs in a `<RichTextInput />` - Style them with the aforementioned buttons - See the results in a `<RichTextField />` **Related code:** Sorry, no repro code, it's a quick bug report. But! Here's the CSS code we manually inserted to adjust the presentation, it may prove useful to fix the problem (it's SASS code but pretty straightforward): ```sass .ql-align-center { text-align: center; } .ql-align-right { text-align: right; } .ql-align-justify { text-align: justify; } @for $indent from 1 to 8 { .ql-indent-#{$indent} { padding-left: #{$indent}em; } } ``` **Environment** * React-admin version: "^3.17.3" * React version: "^17.0.1" * Browser: All Hope this helps! (mentioning @dorian-ubisoft for the record) Answers: username_1: Not really a bug as we don't include those actions by default, so it is indeed your job to update the styles if you need them. By the way, have you noticed we released [ra-richtext-tiptap](https://github.com/marmelab/ra-richtext-tiptap)? This will probably become the new richtext in v4. Status: Issue closed
docker-solr/docker-solr
588981516
Title: Release 8.5 Question: username_0: I volunteer to be RM Answers: username_0: Hmm, after merging #296 I continue with the release, but get stuch at ``` ./generate-stackbrew-library.sh | tee ../new-versions  ✔  ⚙  10228  11:01:37 failed fetching repo "https://github.com/docker-library/official-images/raw/master/library/openjdk:11-jre-stretch" tag not found in manifest for "openjdk": "11-jre-stretch" # this file is generated via https://github.com/docker-solr/docker-solr/blob/96477231f35e50cc9bd8cb0c8e9b004e841a126a/generate-stackbrew-library.sh ``` So openjdk no longer maintains the stretch variant. Does that mean we are *forced* to go to buster @username_2 ? username_0: My proposal is to change FROM=openjdk:11-jre-stretch --> FROM=openjdk:11-jre FROM=openjdk:8-jre-stretch --> FROM=openjdk:8-jre That is the way we do for the slim variants already, and it will currently resolve to `buster` since that is what upstream openjdk uses. username_1: Well... this is even more complicated, as actually 'full image' is strech but slim is using: openjdk:11-jre-slim which currently (according to: https://hub.docker.com/_/openjdk?tab=tags&page=1&name=11-jre-slim ) resolves to... buster. username_0: Exactly, it was inconsistent for the 8.4 release. So removing the pinning and instead rely on what upstream has chosen as deault (i.e. what you would get if you `docker pull openjdk:11-jre` or `docker pull openjdk:11-jre-slim`) we get less to maintain. I see the problematic part too, that users pulling `solr:8.4` will after this change get auto upgraded to buster next time they pull from hub. But it's is the same with Java minor updates as well. According to https://wiki.debian.org/DebianReleases 'jessie' is EOL 2020 (guess this summer) and the new `stable` release buster is already more than 9 months old and should be a better choice for our users. username_1: That sounds good to me, shall I make a quick PR with this update or is there some longer discussion / approval by @username_2 needed? username_0: I'll await @username_2 's response as I'm not that familiar with the generate-stackbrew thing or exactly the requirements of official-images team. We may be allowed to use stretch and work around it but I'd like Martijn's judgement on this. The change is 2 lines so I can commit that as part of the release preparation upstream. username_2: It's a script that generates the [manifest](https://github.com/docker-library/official-images/blob/master/library/solr) for the library. It just lists the things that they will build and tag, on specific architectures. There is some overlap between this, and the `tools/` scripts docker-solr itself uses to maintain the dockerfiles, in particular [update.sh](https://github.com/docker-solr/docker-solr/blob/master/tools/update.sh), in that they both have to figure out what builds exist. But basically, get `update.sh` working first, and generate the right Dockerfiles in our directories. Then after that generate-stackbrew should just do the right thing; just check its output diff [looks plausible](https://github.com/docker-solr/docker-solr/blob/master/update.md#update-the-official-images-repository) step. username_0: @username_2 can you please give me maintainer access so I can push to our official-images fork? username_2: I've added the Maintainers group as maintainers, and you're in that group. username_0: 8.5 is now available in hub. Thanks for the help with the release everyone! Status: Issue closed
Azure/azure-sdk-for-python
419627821
Title: disk_size_gb no longer taken into account for OSDisk ? Question: username_0: Hello! We are creating CentOS 7 VMs through the Azure Python SDK. The disks are unmanaged. We want to specify the size for the OS disk (root disk) at creation time. We do _not_ want to expand / resize the disk later, or to add data disks. Until some months ago, this worked by specifying the 'disk_size_gb' parameter for OSDisk: ``` compute.virtual_machines.create_or_update( resource_group, vm_name, VirtualMachine(..., storageProfile=StorageProfile( osDisk=OSDisk(..., disk_size_gb=60 ) ) ) ``` And we did have 60 GB for the root disk (after growing the file system within cloud-init). But right now, we make the same call, do not get any errors, and the disk size is only 30 GB, the size of the image. The disk_size_gb parameter seems to get ignored. Answers: username_1: @yugangw-msft @username_2 any ideas? username_2: @username_3 username_3: @username_0 @username_1 @username_2 Hmm this is strange, the CLI's vm create allows me to specify the os disk size I desire...: ``` (cli-venv) [03:08 PM] Work-Mac:azure-cli tosin$ az vm create -n tosin-centos-60 \ -g ova-test --image centos --os-disk-size-gb 62 (cli-venv) [03:11 PM] Work-Mac:azure-cli tosin$ az vm show -n tosin-centos-60 -g ova-test --query "{Name:name, OsDisk:storageProfile.osDisk}" { "Name": "tosin-centos-60", "OsDisk": { "caching": "ReadWrite", "createOption": "FromImage", "diffDiskSettings": null, "diskSizeGb": 62, "encryptionSettings": null, "image": null, "managedDisk": { "id": "/subscriptions/00977cdb-163f-435f-9c32-39ec8ae61f4d/resourceGroups/ova-test/providers/Microsoft.Compute/disks/tosin-centos-60_OsDisk_1_c7d795f6fa2541009853d3d71b563054", "resourceGroup": "ova-test", "storageAccountType": "Premium_LRS" }, "name": "tosin-centos-60_OsDisk_1_c7d795f6fa2541009853d3d71b563054", "osType": "Linux", "vhd": null, "writeAcceleratorEnabled": null } } ``` The service seems to respect the vm configuration I use. FYI: vm create uses an ARM template. This might either be a programmatic error or a bug with the sdk or service for some specific scenario. Status: Issue closed username_0: Ahem. Very sorry! Our growfs scripts needed to be more robust. The disk itself was all right, and disk_size_gb is correctly handled. Sorry for the noise! username_1: No problem @username_0 if you encounter any other issue, feel free to ask ;)
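One way to separate a service-side problem from a guest-OS one (which is what this turned out to be, with the filesystem simply not grown) is to read the provisioned size back through the SDK. A rough sketch, assuming the same authenticated `ComputeManagementClient` (`compute`), resource group, and VM name as in the snippet above:
```python
# Hypothetical names; `compute` is an authenticated azure-mgmt-compute ComputeManagementClient.
vm = compute.virtual_machines.get(resource_group, vm_name)
os_disk = vm.storage_profile.os_disk
print("OS disk size reported by Azure:", os_disk.disk_size_gb, "GB")
# If this prints 60 while the guest only sees 30 GB, the disk was provisioned
# correctly and only the partition/filesystem inside the VM still needs expanding.
```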
TK-IT/web
194004300
Title: Old urls are not working - should they? Question: username_0: Facebook "old stuff on this date" showed me this URL - but it doesn't work anymore. [https://www.taagekammeret.dk/Billeder/ShowPicture.php?folder=12-0809%252F92-Loebesedler&pic_file=2akv19.jpg&pic_count=43&mainfoldernr=12](https://www.taagekammeret.dk/Billeder/ShowPicture.php?folder=12-0809%252F92-Loebesedler&pic_file=2akv19.jpg&pic_count=43&mainfoldernr=12) Is that by design/decision, or an error in a re-mapping of old urls? Answers: username_1: No. It is a bug. I will look into it. username_1: The problem is that in `folder=12-0809%252F92-Loebesedler` the `/` is double-encoded as `%252F`. 🤦 Status: Issue closed
tongchen126/Boot-Debian-On-Litex-Rocket
1181288959
Title: Using systemd instead of sysvinit Question: username_0: @username_1 After all my failures on #1 I just tried booting with systemd instead of sysvinit and I get to the login prompt using that. Only when logging in do I encounter a stall. Is this the same behaviour you encountered? Answers: username_0: Interesting... I tried reproducing my own result in a few more test runs but I don't get to login anymore despite not changing anything. Seems like there are some (hardware side?) instabilities causing my system to be incredibly flaky. username_0: Personally I would say that for me using systemd or sysvinit doesn't make any difference to system stability. (If anything systemd produces output for more seconds than sysvinit before freezing.) Did you try switching back to systemd on your stable sysvinit setup? Also in general adding the output you encountered using systemd in this thread or in the readme would be helpful. (Maybe someone has a good idea on how to fix it.) username_0: One interesting thing is that if I remove the sdcard while systemd has seemingly hung up the kernel responds. So it seems like it's definitely a systemd issue or maybe a configuration issue of the kernel to use systemd.
lampepfl/dotty
891799381
Title: Infinite type comparison when narrowing GADT bounds Question: username_0: ## Compiler version `3.0.0` ## Minimized code ```Scala object test { def foo[A <: B, B](m: B) = { m match { case _: A => m match { case _: B => // crash with -Yno-deep-subtypes } } } } ``` ## Output ```scala exception occurred while compiling examples/same-skolem.scala Exception in thread "main" java.lang.AssertionError: assertion failed while compiling examples/same-skolem.scala java.lang.AssertionError: assertion failed at scala.runtime.Scala3RunTime$.assertFailed(Scala3RunTime.scala:11) at dotty.tools.dotc.core.TypeComparer.monitoredIsSubType$1(TypeComparer.scala:224) at dotty.tools.dotc.core.TypeComparer.recur(TypeComparer.scala:1288) at dotty.tools.dotc.core.TypeComparer.isSubType(TypeComparer.scala:186) at dotty.tools.dotc.core.TypeComparer.isSubType(TypeComparer.scala:196) at dotty.tools.dotc.core.TypeComparer.isSub(TypeComparer.scala:198) // many lines at dotty.tools.dotc.Driver.process(Driver.scala:178) at dotty.tools.dotc.Driver.main(Driver.scala:208) at dotty.tools.dotc.Main.main(Main.scala) [error] Nonzero exit code returned from runner: 1 [error] (scala3-compiler / Compile / runMain) Nonzero exit code returned from runner: 1 [error] Total time: 4 s, completed May 14, 2021, 5:53:21 PM ``` ## Expectation Should compile without crashing. ## Cause After the first pattern matching, the GADT bounds will become ``` B >: (?1 : A) ``` The `(?1 : A)` here is the `SkolemType` we created for pattern type in `constrainSimplePatternType`. When typing the second pattern matching, the GADT bounds will become ``` B >: (?1 : A) | (?2 : B) ``` and in `ConstraintHandling#addOneBound` we will ask `isSubType (?1 : A) | (?2 : B) <:< Any?` to check the narrowed bounds. Note that when we are running the subtyping check, the GADT bounds of `B` **have already been been updated** to `(?1 : A) | (?2 : B)`. Related trace on the infinite loop: ``` ==> isSubType (?1 : A) | (?2 : B) <:< Any? ==> isSubType B <:< A? // trying to simplify the OrType // ... <== isSubType B <:< A = false ==> isSubType A <:< B? ==> isSubType A <:< (?1 : A) | (?2 : B)? // GADT bounds of B is used here ==> isSubType B <:< A? // trying to simplify the OrType, again // ... <== isSubType B <:< A = false ==> isSubType A <:< B? ==> isSubType A <:< (?1 : A) | (?2 : B)? // GADT bounds of B is used again // more lines ... infinite loop! ``` @username_1 Answers: username_1: We shouldn't even attempt to record any constraint here, there's no useful information ATM to extract. This is yet another problem that would be best fixed by avoiding the Skolem hack in PatternTypeConstrainer. username_0: If it is not clear how to get rid of the Skolem hack at this point, would it be okay to workaround this issue by checking whether the bound is a Skolem with its underlying type same as the narrowed type parameter? ```scala def isAliasSkolem: Boolean = bound match { case SkolemType(tp) if tp == tr => true case _ => false } if isAliasSkolem then false else ... ``` username_1: The Skolem hack is that in PatternTypeConstrainer, we wrap one of the types into a Skolem and substitute type arguments in the other type to trigger a specific codepath in TypeComparer. This is sort-of morally justified by what a Skolem type is, but it has led to a bunch of subtle issues and at this point we really should just inline the codepath instead. Status: Issue closed
JohnReedLOL/Nineteen_Characters
63270011
Title: Nullpointer exception during attacking Question: username_0: Exception in thread "Thread-2" java.lang.NullPointerException at src.model.constructs.Entity.commitSuicide(Entity.java:573) at src.model.constructs.Entity.checkHealthAndCommitSuicideIfDead(Entity.java:561) at src.model.constructs.Entity.receiveAttack(Entity.java:985) at src.model.constructs.Avatar.receiveAttack(Avatar.java:84) at src.model.MapEntity_Relation.sendAttackToRelativePosition(MapEntity_Relation.java:352) at src.model.MapEntity_Relation.sendAttackInFacingDirection(MapEntity_Relation.java:409) at src.model.constructs.Entity.acceptKeyCommand(Entity.java:526) at src.model.Map.sendCommandToMapWithOptionalText(Map.java:426) at src.io.controller.GameController.sendCommandToMapWithText(GameController.java:214) at src.io.controller.GameController.access$0(GameController.java:192) at src.io.controller.GameController$ChatBoxMiniController.sendTextCommandAndUpdate(GameController.java:61) at src.io.controller.GameController$ChatBoxMiniController.processQueue(GameController.java:136) at src.io.controller.GameController.process(GameController.java:317) at src.io.controller.Controller.sleepLoop(Controller.java:74) at src.io.controller.GameController.run(GameController.java:168) at java.lang.Thread.run(Thread.java:745) Status: Issue closed Answers: username_1: fixed? (probably)
rust-skia/rust-skia
812858688
Title: Build fails when cross-compiling to armv7-unknown-linux-gnueabihf Question: username_0: I'm trying to cross-compile skia-safe for my Raspberry Pi 3B+ (which is `armv7-unknown-linux-gnueabihf`) from my Linux desktop. Currently, I am getting an error on `cargo build --release --target=armv7-unknown-linux-gnueabihf`. It seems to be starting a full build as expected, but stuck on `cmath` and `atomic` header. I suspect this is mostly an LLVM problem. ``` ➜ rust-skia git:(master) cargo build --release --target=armv7-unknown-linux-gnueabihf Compiling skia-bindings v0.37.0 (/home/user/Repos/arm/rust-skia/skia-bindings) error: failed to run custom build command for `skia-bindings v0.37.0 (/home/user/Repos/arm/rust-skia/skia-bindings)` Caused by: process didn't exit successfully: `/home/user/Repos/arm/rust-skia/target/release/build/skia-bindings-a740e7982c3369a4/build-script-build` (exit code: 101) --- stdout cargo:rerun-if-env-changed=SKIA_DEBUG HOST: x86_64-unknown-linux-gnu cargo:rerun-if-env-changed=OPT_LEVEL cargo:rerun-if-env-changed=CC cargo:rerun-if-env-changed=CXX cargo:rerun-if-env-changed=SKIA_OFFLINE_SOURCE_DIR cargo:rerun-if-env-changed=FORCE_SKIA_BUILD cargo:rerun-if-env-changed=FORCE_SKIA_BINARIES_DOWNLOAD STARTING A FULL BUILD cargo:rerun-if-env-changed=SDKROOT Probing 'python' Probing 'python2' Python 2 found: "python2" Synchronizing Skia dependencies Skipping "../src". skia/third_party/externals/d3d12a... @ 169895d529dfce00390a20e69c2f516066fe7a3b skia/third_party/externals/expat @ e976867fb57a0cd87e3b0fe05d59e0ed63c6febb skia/third_party/externals/freetype @ 40c5681ab92e7db1298273ccf3c816e6a1498260 skia/third_party/externals/libgif... @ d06d2a6d42baf6c0c91cacc28df2542a911d05fe skia/third_party/externals/harfbuzz @ 3a74ee528255cc027d84b204a87b5c25e47bff79 skia/third_party/externals/libjpe... @ 64fc43d52351ed52143208ce6a656c03db56462b skia/third_party/externals/libwebp @ 55a080e50af655d1fbe0a5c22954835cdd59ff92 skia/third_party/externals/piex @ bb217acdca1cc0c16b704669dd6f91a1b509c406 skia/third_party/externals/zlib @ eaf99a4e2009b0e5759e6070ad1760ac1dd75461 skia/third_party/externals/libpng @ 386707c6d19b974ca2e3db7f5c61873813c6fe44 skia/third_party/externals/icu @ dbd3825b31041d782c5b504c59dcfb5ac7dda08c skia/third_party/externals/spirv-... @ bdbef7b1f3982fe99a62d076043036abe6dd6d80 Skia args: is_official_build=true is_debug=false skia_enable_gpu=false skia_use_gl=false skia_use_egl=false skia_use_x11=false skia_use_system_libjpeg_turbo=false skia_use_system_libpng=false skia_use_libwebp_encode=false skia_use_libwebp_decode=false skia_use_system_zlib=false skia_use_xps=false skia_use_dng_sdk=false cc="clang" cxx="clang++" skia_use_icu=false target_os="linux" target_cpu="arm" skia_use_expat=true skia_use_system_expat=false extra_cflags=["--target=armv7-unknown-linux-gnueabihf","-O3"] extra_asmflags=["--target=armv7-unknown-linux-gnueabihf"] Done. 
Made 73 targets from 27 files in 27ms ninja: Entering directory `/home/user/Repos/arm/rust-skia/target/armv7-unknown-linux-gnueabihf/release/build/skia-bindings-667a015589544290/out/skia' [1/682] compile ../../../../../../../skia-bindings/skia/src/core/SkPathMeasure.cpp FAILED: obj/src/core/libpathkit.SkPathMeasure.o clang++ -MD -MF obj/src/core/libpathkit.SkPathMeasure.o.d -DNDEBUG -DSK_R32_SHIFT=16 -DSK_SUPPORT_GPU=0 -DSK_GAMMA_APPLY_TO_A8 -DSKIA_IMPLEMENTATION=1 -I../../../../../../../skia-bindings/skia -Wno-attributes -fstrict-aliasing -fPIC -fvisibility=hidden -march=armv7-a -mfpu=neon -mthumb -O3 -fdata-sections -ffunction-sections -Wno-sign-conversion -Wno-unused-parameter --target=armv7-unknown-linux-gnueabihf -O3 -std=c++17 -fvisibility-inlines-hidden -fno-exceptions -fno-rtti -c ../../../../../../../skia-bindings/skia/src/core/SkPathMeasure.cpp -o obj/src/core/libpathkit.SkPathMeasure.o In file included from ../../../../../../../skia-bindings/skia/src/core/SkPathMeasure.cpp:8: In file included from ../../../../../../../skia-bindings/skia/include/core/SkContourMeasure.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkPath.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkMatrix.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkRect.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkPoint.h:12: In file included from ../../../../../../../skia-bindings/skia/include/core/SkScalar.h:11: ../../../../../../../skia-bindings/skia/include/private/SkFloatingPoint.h:16:10: fatal error: 'cmath' file not found #include <cmath> ^~~~~~~ 1 error generated. [2/682] compile ../../../../../../../skia-bindings/skia/src/ports/SkFontConfigInterface_direct_factory.cpp FAILED: obj/src/ports/fontmgr_FontConfigInterface.SkFontConfigInterface_direct_factory.o clang++ -MD -MF obj/src/ports/fontmgr_FontConfigInterface.SkFontConfigInterface_direct_factory.o.d -DNDEBUG -DSK_R32_SHIFT=16 -DSK_SUPPORT_GPU=0 -DSK_GAMMA_APPLY_TO_A8 -DSKIA_IMPLEMENTATION=1 -I../../../../../../../skia-bindings/skia -Wno-attributes -fstrict-aliasing -fPIC -fvisibility=hidden -march=armv7-a -mfpu=neon -mthumb -O3 -fdata-sections -ffunction-sections -Wno-sign-conversion -Wno-unused-parameter --target=armv7-unknown-linux-gnueabihf -O3 -std=c++17 -fvisibility-inlines-hidden -fno-exceptions -fno-rtti -c ../../../../../../../skia-bindings/skia/src/ports/SkFontConfigInterface_direct_factory.cpp -o obj/src/ports/fontmgr_FontConfigInterface.SkFontConfigInterface_direct_factory.o In file included from ../../../../../../../skia-bindings/skia/src/ports/SkFontConfigInterface_direct_factory.cpp:8: ../../../../../../../skia-bindings/skia/include/private/SkOnce.h:12:10: fatal error: 'atomic' file not found [Truncated] 1 error generated. 
[8/682] compile ../../../../../../../skia-bindings/skia/src/core/SkPathEffect.cpp FAILED: obj/src/core/libpathkit.SkPathEffect.o clang++ -MD -MF obj/src/core/libpathkit.SkPathEffect.o.d -DNDEBUG -DSK_R32_SHIFT=16 -DSK_SUPPORT_GPU=0 -DSK_GAMMA_APPLY_TO_A8 -DSKIA_IMPLEMENTATION=1 -I../../../../../../../skia-bindings/skia -Wno-attributes -fstrict-aliasing -fPIC -fvisibility=hidden -march=armv7-a -mfpu=neon -mthumb -O3 -fdata-sections -ffunction-sections -Wno-sign-conversion -Wno-unused-parameter --target=armv7-unknown-linux-gnueabihf -O3 -std=c++17 -fvisibility-inlines-hidden -fno-exceptions -fno-rtti -c ../../../../../../../skia-bindings/skia/src/core/SkPathEffect.cpp -o obj/src/core/libpathkit.SkPathEffect.o In file included from ../../../../../../../skia-bindings/skia/src/core/SkPathEffect.cpp:8: In file included from ../../../../../../../skia-bindings/skia/include/core/SkPath.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkMatrix.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkRect.h:11: In file included from ../../../../../../../skia-bindings/skia/include/core/SkPoint.h:12: In file included from ../../../../../../../skia-bindings/skia/include/core/SkScalar.h:11: ../../../../../../../skia-bindings/skia/include/private/SkFloatingPoint.h:16:10: fatal error: 'cmath' file not found #include <cmath> ^~~~~~~ 1 error generated. ninja: build stopped: subcommand failed. --- stderr thread 'main' panicked at '`ninja` returned an error, please check the output for details.', skia-bindings/build_support/skia.rs:696:5 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace ``` Answers: username_0: I'm using LLVM/Clang 11, if that matters. ``` ➜ rust-skia git:(master) clang --version Ubuntu clang version 11.0.0-2 Target: x86_64-pc-linux-gnu Thread model: posix InstalledDir: /usr/bin ``` username_1: I honestly have no idea if skia is supported on raspberry pi. I think it would be better to raise this issue upstream either with the skulpin project (I'm guessing vulkan isn't super well supported on raspberry pi) or with the skia-safe folks directly as I rely on them for rendering. username_1: I'm gonna close this as I don't think its particularly related to neovide and we don't currently support raspberry pi as a target. username_2: I suspect that you need to set up a `--sysroot` that points to the arm toolchain via the `SDKROOT` environment variable and perhaps add include directories for the C++ headers to cross compile. username_0: Sorry for the late response, that is indeed the case. I missed the part where they specified that `SDKROOT` was indeed needed to cross-compile Skia in the build docs. I think the issue can now be closed. I have a stupid question though, how should I setup an `SDKROOT`? Skia docs itself doesn't mention anything much about this (?) and I'm rather new to C++ tooling. username_2: The contents of the `SDKROOT` environment variable is converted the argument `--sysroot` that is passed as `extra_cflags` to the gn Skia build script, which in turn forwards it to the LLVM / clang invocation. For examples how to cross-compile, there is [android support](https://github.com/rust-skia/rust-skia/blob/master/skia-bindings/build_support/android.rs) in this repository and you'll find a number of examples online that show how to install the Raspberry Pi toolchain and set up the `--sysroot` argument for C++ cross-compilation. 
To find out how Skia C++ files are built, build rust-skia with `-vv` and look for the line: ``` [skia-bindings 0.37.0] ninja: Entering directory `...\rust-skia\target\debug\build\skia-bindings-ff3956655d496582\out\skia' ``` In this directory, the file `obj/skia.ninja` lists the arguments clang is invoked with. I would appreciate it if someone could provide a Dockerfile that creates a build of rust-skia targeting Raspberry Pi, which we could then build before publishing a new release.
numpy/numpy
416396264
Title: ValueError: setting an array element with a sequence. python Question: username_0: <!-- Please describe the issue in detail here, and fill in the fields below --> I am trying to solve a differential equation contains complex numbers. These complex numbers are constants A and B. ### Reproducing code example: <!-- A short code example that reproduces the problem/missing feature. It should be self-contained, i.e., possible to run as-is via 'python myproblem.py' --> ```python import numpy as np import matplotlib.pyplot as plt # intinialize values x0=0 y0=1 xf=2 h=0.1 n=200 y=np.zeros([n]) t=np.zeros([n]) t[0]=y0 t[0]=x0 t=np.linspace(x0,xf,n) gamma=1.0 width=1.0 v_g=1.0 L=1.0 k=1 r=1.0 C=1j A=((8*np.pi)**(1.0/4.0)/(np.sqrt(width*L)))*(-C/2*np.pi)*np.sqrt(gamma*v_g*L/2) B=(k**2)+C*k*r-((width**2)*(r**2)/4)+r*v_g*t*(width**2/2)-v_g**2*t**2*(width**2/4)+k**2/width**2 for i in range(1,n): t[i]=t[i-1]+h slope=A*np.exp(B)-(1/2)*(y[i-1]) y[i]=y[i-1]+h*slope plt.plot(t,y,'-') ``` <!-- Remove these sections for a feature request --> ### Error message: ValueError: setting an array element with a sequence. <!-- If you are reporting a segfault please include a GDB traceback, which you can generate by following https://github.com/numpy/numpy/blob/master/doc/source/dev/development_environment.rst#debugging --> Can anyone help me to explain the problem and give a suggestion to solve the problem? Status: Issue closed Answers: username_1: Sorry, but I think it will be instructional for you to figure it out on your own: * Check the line that the ValueError points to and read the error message carefully * Inspect the code on the line and/or split it up into its components. * Inspect all the operands involved, you can use the debugger or simply print statements. The numpy issue tracker is not a support platform, we do generally help out, but such debugging question should be asked elsewhere normally. You can even probably find similar questions by searching for the error message which probably gives enough hints on how what is wrong. (although again, inspecting what you are asking the code to do will tell you very quickly).
friznit/Unofficial-Tantares-Wiki
807873911
Title: No real guide to assemble craft Question: username_0: Hello! With all due respect, this wiki is great, but it's just not your BDB wiki. You seem to be the only person I've found that really knows how to assemble the Tantares craft accurately, but at least on the more detailed craft, like ISS or Mir modules, it seems you just attached graphs to the wiki pages, which, in my opinion, makes it a bit more confusing than it really is. I'd appreciate if you could show how to assemble the ISS-Mir Zvezda and Zarya modules. I know this might sound a bit like "yo you lazy wake up and tell me how to do this bitch", but it isn't. I'm just a bit confused by this and am asking for help. - Matias<issue_closed> Status: Issue closed
MicrosoftDocs/feedback
1023871480
Title: Windows CNG : BCryptEncrypt API failing for AES-128-CFB-128 when the input plaintext is size is not multiple of 16 and padding is off Question: username_0: BCryptEncrypt API but API is failing when input plaintext is size is not multiple of 16 and padding is off with error _0xc0000206/STATUS_INVALID_BUFFER_SIZE_. **AES-CFB encryption is self-synchronizing stream cipher** so it should run if the input plain text length is not multiple of blocksize. I tested AES-128-CFB-128 with OpenSSL for the same plain text and padding is off then it generating the ciphertext but windows CNG is failing for the same. Please visit [link](https://docs.microsoft.com/en-us/answers/questions/579705/bcryptencrypt-api-failing-for-aes-128-cfb-128-when.html) for more details Answers: username_1: Doc is here: https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptencrypt#parameters. Tagging @drewbatgit
biocodellc/geome-ui
607737789
Title: Add discoverable attribute to project overview Question: username_0: When looking at a project overview add discoverable attributes to project overview. This is a new project metadata attribute recently committed to geome-ui and geome-db. Here are the conditions for display on the project overview: public=false; discoverable=true, display: Visibility: This is a private project but is discoverable public=false; discoverable=false, display: Visibility: This is a private project and is not discoverable public=true, display: Visibility: This is a public project Answers: username_0: This is done. Status: Issue closed
Level/leveldown
437742452
Title: Refactor chained and array-form batch to both use the `Batch` struct Question: username_0: And thus avoid the duplicate logic for `hasData` as well as the `sync` option. Status: Issue closed Answers: username_0: Closing because there's no real benefit, or real pain points. It'd mostly just be refactoring for the sake of refactoring. If there were bugs (that we didn't already fix since opening this issue) I'd feel different.
pattern-lab/plugin-node-tab
214891487
Title: Plugin not working Question: username_0: I'm not clear on how this plugin even works. I've installed it, run the postinstall (which by the way adds the asset folders back to my root directory when I've changed that in the settings config) and I add a sass file next to the `button.mustache` but then I get an error: ``` plugin-node-tab: There was an error parsing sibling JSON for 00-atoms/buttons/button.mustache { Error: ENOENT: no such file or directory, stat '/edition-node-gulp/src/_patterns/00-atoms/buttons/button.js' at Object.fs.statSync (fs.js:898:18) ``` Not only does it not show `.js` files in the settings docs, but it also shows as everything else as optional. Also, I'm not even clear what this plugin is doing? Is it adding a tab above the top black navigation? Answers: username_1: Hi Cameron Somehow the notification for this issue fell through the cracks. The plugin was working for me last check, but I will make sure to test it anew this week.
nunomaduro/phpinsights
1033110402
Title: Able to set timeout limit Question: username_0: | Q | A | ---------------- | ----- | Bug report? | no | Feature request? | yes | Library version | 2.0.1 Hi can we able to set this dynamically by adding `->setTimeout()` ? https://github.com/nunomaduro/phpinsights/blob/d07b45bb8add1f608fd007efbb989ecca59e3e96/src/Domain/Insights/SyntaxCheck.php#L54 I have an issue of `timeout` after adding more feature on a large project `php artisan insight --fix` ```bash 827/830 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 99% Symfony\Component\Process\Exception\ProcessTimedOutException The process "/usr/bin/php7.4 '/home/vagrant/code/laravel/project-x/vendor/bin/parallel-lint' --no-colors --no-progress --json --exclude 'bower_components' --exclude 'node_modules' --exclude 'vendor' --exclude 'vendor-bin' --exclude '.phpstorm.meta.php' --exclude 'config' --exclude 'storage' --exclude 'resources' --exclude 'bootstrap' --exclude 'nova' --exclude 'database' --exclude 'server.php' --exclude '_ide_helper.php' --exclude '_ide_helper_models.php' --exclude 'app/Providers/TelescopeServiceProvider.php' --exclude 'public' --exclude 'swagger' --exclude 'routes/console.php' --exclude 'deprecated.php' --exclude 'vendor' --exclude 'tests' --exclude 'Tests' --exclude 'test' --exclude 'Test' ." exceeded the timeout of 60 seconds. at vendor/symfony/process/Process.php:1217 1213▕ 1214▕ if (null !== $this->timeout && $this->timeout < microtime(true) - $this->starttime) { 1215▕ $this->stop(0); 1216▕ ➜ 1217▕ throw new ProcessTimedOutException($this, ProcessTimedOutException::TYPE_GENERAL); 1218▕ } 1219▕ 1220▕ if (null !== $this->idleTimeout && $this->idleTimeout < microtime(true) - $this->lastOutputTime) { 1221▕ $this->stop(0); +7 vendor frames 8 [internal]:0 NunoMaduro\PhpInsights\Domain\Insights\InsightCollectionFactory::NunoMaduro\PhpInsights\Domain\Insights\{closure}() +17 vendor frames 26 artisan:37 Illuminate\Foundation\Console\Kernel::handle() Script @php artisan insight --fix handling the insight-fix event returned with error code 1 ``` in my case trying this will working ```php $process ->setTimeout(60*5) ->run(); ``` Thank you Answers: username_1: This would make a good PR, adding in an option to set timeouts would be cool.
kostaskougios/cloning
196937124
Title: Clone to Different class Question: username_0: Usage for copying attributes into a different class instance is missing. ex: BusinessObject A { String attr1; List<ClassB> attr2; Template attr3 } BusinessObjectADTO { String attr1; List<ClassB> attr2; } How do I copy BusinessObject A into BusinessObjectADTO? Please suggest
TheAlgorithms/Python
1017751970
Title: Piled Up PRs Question: username_0: There are so many unmerged PRs awaiting reviews. People have put in so much in making the algos and filing PRs.. are you guys short on maintainers? Pls at least address the issue! Status: Issue closed Answers: username_1: Please be patient and in the meantime, you can review others' PRs. PRs with tests/types/docs alerts from bots have a lower priority to be reviewed.
clash-lang/clash-compiler
74848107
Title: How to define Block ROM BlackBox... Question: username_0: Hi Christiaan, Considering that both _Altera_ and _Xilinx_ support *Block ROM* _IP Core_ generators, and these seem to work quite fast, I'd love to see *Block ROM* black box defined. Could you please lend me a hand, and give a tutorial about how to define such a custom `BlackBox`? Is it something an average user could do, or is it supposed to be hard-wired in the internals of the compiler? Answers: username_1: There are already black boxes for the following functions: http://hackage.haskell.org/package/clash-prelude-0.7.5/docs/CLaSH-Prelude-BlockRam.html There's also already a section in the tutorial on custom black boxes: http://hackage.haskell.org/package/clash-prelude-0.7.5/docs/CLaSH-Tutorial.html#g:13 Could you elaborate what you feel is missing from the tutorial? Status: Issue closed username_0: Thanks a lot for info!
usgs/groundmotion-processing
875953316
Title: store noise time series in the output HDF file Question: username_0: The signal-to-noise ratio can be used to evaluate the quality of processed ground motions and determine the usable frequency in the Fourier domain. For some projects, the SNR will be required to be computed for different wave windows (e.g., S-wave window). Therefore, I was wondering if the noise time series, which is picked during the signal processing, could be stored in the output HDF file for each event.<issue_closed> Status: Issue closed
Wolkabout/WolkGateway
497519139
Title: Persistence and timestamps Question: username_0: In cases where modules send data without timestamps to the gateway, but the gateway is not connected to the platform at the moment, the data is placed into storage. Is this data then assigned a timestamp when being placed into storage in order to prevent it from being treated as "live" when connection is restored and stored data is published? Answers: username_1: No, gw does not modify message content which rtc is a part of Status: Issue closed username_0: Marking resolved as this will be done in Module SDKs then instead.
BeamMW/beam
681679445
Title: UI Wallet crashes when we create a new transaction to an offline UI Wallet with a swap "in progress" and then re-open the closed wallet. Question: username_0: **Bug description** When we try to send some coins to an offline wallet that has a swap "in progress" and then re-open the closed wallet, it crashes. The receiver wallet has a swap "in progress" (node connection) and the sender wallet has the same swap in "failed" status (also node connection). **To Reproduce** 1. Open two UI Wallets - sender and receiver (UI version 5.0.9484.3083) 2. Send some coins from the sender to the closed receiver wallet 3. Re-open the receiver wallet, when the sender wallet has transaction status "Waiting for receiver" **Actual** Receiver wallet is crashed **Expected** Receiver wallet is opened and transaction status in both wallets is "In progress" [logs sender wallet.zip](https://github.com/BeamMW/beam/files/5095182/logs.sender.wallet.zip) [wallet.db sender.zip](https://github.com/BeamMW/beam/files/5095185/wallet.db.sender.zip) [Beam Wallet Masternet DMP.zip](https://github.com/BeamMW/beam/files/5095189/Beam.Wallet.Masternet.DMP.zip) [logs reciever wallet.zip](https://github.com/BeamMW/beam/files/5095190/logs.reciever.wallet.zip) [wallet.db reciever.zip](https://github.com/BeamMW/beam/files/5095191/wallet.db.reciever.zip) Sender wallet password - 1 Receiver wallet password - 1 Answers: username_1: The problem is with incompatible swap settings. I'm not sure how this happened. Please, try to reproduce it again. Status: Issue closed username_0: Cannot reproduce. Checked on master 5.0.9504.3103.
MartMan612/WoodenWitch
505577360
Title: **Automation** Question: username_0: [Automatically move your cards](https://help.github.com/articles/configuring-automation-for-project-boards/) to the right place based on the status and activity of your issues and pull requests. Status: Issue closed Answers: username_0: [Automatically move your cards](https://help.github.com/articles/configuring-automation-for-project-boards/) to the right place based on the status and activity of your issues and pull requests. Status: Issue closed username_0: [Automatically move your cards](https://help.github.com/articles/configuring-automation-for-project-boards/) to the right place based on the status and activity of your issues and pull requests. username_0: tested Status: Issue closed
tidyverse/tidyverse
665820419
Title: Add tidyselect and other packages? Question: username_0: I wonder if you would be OK for having tidyselect as part of the package list. This could be PR together with #216... But perhaps it should be first decided whether or not other packages would also need to be included in the list. E.g. if {rlang} stays in, perhaps {vctrs} should be included too. Answers: username_0: Related to that, purrr looks a bit like an odd ball compared to the 7 other core packages. I think that these days, `dplyr::across()` and `dplyr::c_across()` will probably replace calls to mapping functions and I am not sure users rely on purrr for much else. So, if you wanted to keep the core packages as those focussed on basic data manipulation, perhaps purrr should no longer be included within the list of core packages. username_1: tidyselect isn't a user facing package so doesn't need to be included. Status: Issue closed
gedankenstuecke/snpr
85013677
Title: Bug in Variations you did not enter Question: username_0: Hi when in the section "Variations you did not enter", when you initially click on one (in thie case Astigmatism) I am greeted with a popup where you can either write a text to describe your variation or choose from the list below. I initially clicked one of the options and the free text option on top disappeared. However, after reading the options I decided none of them really described my phenotype correctly so because I wanted to use the free text option I closed the popup without clicking OK When I clicked on Astigmatism option again the free-text option would not show up again and one the multiple choice options was selected. I believe if the user does not click ok and navigates out of the popup, the multiple choice he innitialy clicked should not be accepted as valid and thus when navigating back to the variation, the free-text option should be visible. Answers: username_1: Thank you very much for reporting this! You earned +1 bug reporting karma. :) One of us will look into it as soon as he finds the time. username_0: you're welcome. username_2: This bug even goes a bit further, because it removes **all** free-text input fields as I've just found out. This is due to the fact that we do a blanket `$('input[type=radio]').click(function()` in the view, without specifying to what the button should belonging to. So fixing that might be a bit more complicated? username_3: I think this code snippet is all that is needed, but I'm not sure what file it should go in (I'm unfamiliar with ruby project layout). ```javascript $('.close').click(function(){ $('input[type=text]').show(); }) ``` username_2: I guess that could go into [`app/views/user_phenotypes/_form.html.erb`](https://github.com/openSNP/snpr/blob/67ff132c3b97a9c626d4a4c07a520c8e37ed3d7f/app/views/user_phenotypes/_form.html.erb)? But I assume that @username_1 has a better idea of how to correctly do it 😉 username_2: Ah, thanks. Do we already have a Issue open for that? username_1: Not that I know of. username_3: @username_2 : can you tell me how to reproduce this bug "remove[ing] all free-text input fields"? Looking [this line](https://github.com/openSNP/snpr/blob/67ff132c3b97a9c626d4a4c07a520c8e37ed3d7f/app/views/user_phenotypes/_new.html.erb#L34) I wouldn't expect that behavior, and if it is true there may be more to this bug than I originally thought. @username_1 rightly suggested I be more specific with my jquery selector, but then that won't address the other affected text fields. username_2: @username_3: Yes, I think the issue reported here is another one: After logging in there will be a list of phenotypes you've not entered yet. If you click on the button to add new ones the modal will open giving you the different options. ![screen shot 2016-12-28 at 11 38 00](https://cloud.githubusercontent.com/assets/674899/21520113/4a84c014-ccf2-11e6-9862-4c81a6430f76.png) Once you click on one of the radio buttons the free-text input field in the modal will disappear. ![screen shot 2016-12-28 at 11 38 05](https://cloud.githubusercontent.com/assets/674899/21520148/8b30d86e-ccf2-11e6-9209-0e2a827555c9.png) Now, if you close that modal and open it again the free text field will stay hidden and there's no way to get it back. Does that help? 😄 Status: Issue closed
yiisoft/yii2
100266796
Title: Class 'yii\bootstrap\Html' not found Question: username_0: I'm having a _PHP Fatal Error – yii\base\ErrorException_. This happens when I began to include <code>use yii\bootstrap\Html;</code> and use <code><?= Html::submitButton('Resend Activation Link', ['class' => 'btn btn-primary']) ?></code> somewhere in one of my views file. Under the directory _vendor/yiisoft/yii2-bootstrap_, I couldn't find the file _Html.php_. ![screen shot 2015-08-11 at pm 05 03 18](https://cloud.githubusercontent.com/assets/11458029/9194060/0effdc70-404b-11e5-8605-8c8ef3b31628.png) I just did an upgrade to Yii version 2.0.6 from 2.0.5 and the file _Html.php_ did not appear too. For testing sake, I did a totally fresh Composer installation as per http://www.yiiframework.com/doc-2.0/guide-start-installation.html. Nope, _Html.php_ did not appear too. I'm very tempted to just download https://raw.githubusercontent.com/yiisoft/yii2-bootstrap/master/Html.php and paste into the yii2-bootstrap directory. Kindly advise me. Thank you for your time! :heart_eyes: Answers: username_1: 1) delete the /vendor/yiisoft directories 2) delete your composer cache this will solve it Status: Issue closed username_0: This is what I did: <code>rm -rf yiisoft</code> <code>sudo composer self-update</code> <code>sudo composer clear-cache</code> <code>cd myprojectdirectory</code> <code>composer update</code> ![screen shot 2015-08-12 at pm 03 40 55](https://cloud.githubusercontent.com/assets/11458029/9218870/e86046a4-4108-11e5-9b01-1d7ceeb9fc9a.png) _Html.php_ still did not appear. I actually dig into _~/.composer/cache/files/yiisoft/yii2-bootstrap/1b6b1e61cf91c3cdd517d6a7e71d30bb212e4af0.zip_ and extract the zip file and couldn't find the _Html.php_ too. Any solution? Thanks for your time. :cry: username_0: I digged into https://github.com/yiisoft/yii2-bootstrap/releases and downloaded 2.0.4.zip. ___Html.php_ was not inside__! username_1: true, I just wanted to mention it ;) username_2: Yes. It's not released yet.
SoftverInzenjeringETFSA/SI2014Tim3
81331504
Title: BAG7 Question: username_0: Component: AutobusiForma commit ID: f833da1 Problem description: Two message boxes are displayed, printing the error type Steps to reproduce the problem: 1. Start the application 2. Log in as a manager 3. Click the "Evidencija autobusa" button 4. Go to the modify tab 5. Enter a decimal number for the number of seats, e.g. 54.7 6. Click "Dodaj" Actual result: - An error notification, followed by a printout of the error type ![image](https://cloud.githubusercontent.com/assets/11446476/7831221/f6588546-0454-11e5-8ebb-338ccae3c28c.png) Expected result: Only a single error notification Status: Issue closed Answers: username_1: Several buses with the same registration plates were entered into the database manually; it is impossible to enter that through the forms, and the application user will only perform manipulations through the forms and will not have database access to enter data manually, otherwise why develop the application at all.
xiph/rav1e
626052928
Title: Encoder statistics recoding for bottom-up partition search is not feasible Question: username_0: First of all, I think you the author for encoder statistics recording feature, which is turned on "-v". I found that, without additional save/restore code for mode decisions, it is impossible to gather statistics for coding modes for bottomup partition search. Because, the mode decision is done post-encoding, then currrnet rav1e only rewinds the entropy coder after best mode is chosen, with no other info like coding modes are restored. So, "-v" option with speed 0 or 1 (i.e. bottomup partition), it records the encoding statistics of RDO as well. Answers: username_0: FYI, I was able to catch this case coincidentally, when my patch with bug run at speed 0 and the -v option says lots of 4x4s or 8x8s while the bitstream rarely have it as checked with bitstream analyzer. username_1: Yeah, I noticed this was a possibility when implementing this. I didn't have proof that it was wrong, but speculated based on the code. However, I left it in because I still assumed it to be feasible and fixable later. Question is what's the best thing to do with it? Disable stats recording if we are doing bottom-up encoding? username_0: Hey- thanks for replying this quick. Yes, I think simply not support the feature if partition search method is bottomup. Unless rav1e has libaom's huge tree style storage to remember every partition decision in a SB, I think there is no way to pick the final decisions, i.e. partition decisions. That information (partition decisions) is only available from decoder! username_1: This makes me think that it might even make sense to remove the statistics entirely from rav1e, and instead add a feature to aomanalyzer (or something) to display encode-wide statistics on decode. Maybe aomanalyzer isn't the right tool since currently it only decodes one frame at a time. But something at decode-time. username_0: Hi, so I didn't really want to discourage and drop all these helpful and supporting features, because the statistics collection in encoder side is in fact very crucial tool to enhance the video encoders. Before saying those now popular machine intelligence schemes, please recall that how those hundreds of probabilities (CDF, Cumulative Density Functions) are acquired for AV1, and all past standards like H.264 (I remember there was more than 400 CDFs). They are all collected from encoder side statistics! So, I don't really discourage the idea which has just begun. And I strongly believe we can devise some working method! username_0: If you give me some time (like days) and hold it, I can peruse the encoder stat collection code in top-down partition search carefully and if it looks big task to fix it or seems infeasible as in bottom-up.
sccn/labstreaminglayer
248480188
Title: Undefined function 'lsl_resolve_all_' in Matlab Viewer Question: username_0: Hello, I have downloaded labstreaminglayer, installed dependencies, and added the liblsl-Matlab and the Apps/MATLABViewer to the Path. However, when I type "vis_stream", I get the following error: Answers: username_1: See readme.txt at https://github.com/sccn/labstreaminglayer/tree/master/LSL/liblsl-Matlab Especially note "The bin/ directory needs to contain an up-to-date build of the library file for your MATLAB version (e.g. liblsl64.dll for 64-bit MATLAB on Windows)." username_0: Thank you, yes I compiled the latest liblsl64.dylib according to https://github.com/sccn/labstreaminglayer/blob/master/LSL/liblsl/INSTALL and copied them in the /liblsl-Matlab/bin folder, but still getting the same error. Any other suggestions? username_1: It is difficult to tell from here. Almost certainly it is a problem with files/functions not being in the path that you think are available. Here is a post that discusses issues like you are describing: https://www.mathworks.com/matlabcentral/newsreader/view_thread/328532.html? It may be helpful. username_1: I googled "not finding functions in the mex folder". username_2: On a Mac, you often can't fix a missing library problem just by copying it into a folder that happens to be on the path, you have to copy it into a folder where the executable will look, and where the library thinks it should belong. Start by doing `otool -L` on each of the mex files and on the dylib, then look into `install_name_tool` to fix the problem. Please tell us which instructions you followed to build the dylib, where you got the mex files from (or if you compiled them yourself, which instructions you followed), and report the output of otool. username_0: Thank you both, I think I found the error. I downloaded the dependencies by running "python get_deps.py" and this installed the .mexw64, .mexw64.md5, .mexa64 and .mexa64.md5 in the liblsl-Matlab/mex folder, but not the .mexmaci64 files, which is what is actually needed to run it from Matlab. So the mex files need to be recompiled. How can I do that? username_0: I found the .mexmaci64 files in the "liblsl-Matlab/mex/build-rolling" folder of the liblsl-Matlab-1.11 library in the ftp: ftp://sccn.ucsd.edu/pub/software/LSL/SDK/liblsl-Matlab-1.11.zip Would be great if these mex files could be also included with the official dependencies that are downloaded with the get_deps.py script. Thank you all for your help username_3: Could you try the newly added `build_mex.m`?
argoproj/argo-rollouts
510781708
Title: Have Preview Service always point at something Question: username_0: In order to prevent 502s on the preview service, the preview service should always point at something. If there is a new ReplicaSet, the preview service should point at those pods. Otherwise, it should point at the stable ReplicaSet.<issue_closed> Status: Issue closed
Louiszhai/Louiszhai.github.io
296658732
Title: A question about the XOR usage paragraph in the bitwise operations article, hoping the author can clarify Question: username_0: 1. For "the (n+1)th digit from the end", from which side should we start counting, and what should n be? 2. Could you provide a concrete usage example? 3. In the article, "its two's-complement form": does "it" refer to a ^ b, to a, or to b? Answers: username_1: My wording there may have broken the expression up badly: in x - (2 to the power n), "2 to the power n" is a single term, please read it as a whole. "The (n+1)th digit from the end" is counted from right to left, so for a bitwise operation like `14^8`, 8 is 2 to the power 3, so n is 3 and n+1 means the 4th digit from the end, i.e. the 4th digit counting from the right. For 14, whose two's-complement form is `1110`, the 4th digit from the end is 1, so 8 should be subtracted; plugging into the formula above, the final result is `14^8 === 14-8 === 6`. If 14 is replaced by 16, whose two's-complement form is `10000`, its 4th digit from the end is 0, so 8 should be added, and the final result is 24. Status: Issue closed
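A small self-contained check of the rule explained above (a sketch in Python rather than the JavaScript used in the original article; the function name is just for illustration):

```python
def xor_power_of_two(x, n):
    # x ^ (2**n) flips bit n of x: if that bit is already 1 the result is
    # x - 2**n, otherwise it is x + 2**n, which is exactly the rule above.
    return x - 2**n if (x >> n) & 1 else x + 2**n

assert 14 ^ 8 == xor_power_of_two(14, 3) == 6   # bit 3 of 14 (0b1110) is 1
assert 16 ^ 8 == xor_power_of_two(16, 3) == 24  # bit 3 of 16 (0b10000) is 0
```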
orbitalquark/textadept
1048575139
Title: Error compiling textadept-curses on Linux Question: username_0: I'm having problems with the textadept-curses compilation ``` make PREFIX=/usr -C src textadept-curses make[2]: Entering directory '/home/paag/Devel/ta/textadept/src' g++ -c -Os -std=c++17 -pedantic -DCURSES -DSCI_LEXER -DNDEBUG -Iscintilla/include -Iscintilla/src -Ilexilla/include -Ilexilla/lexlib -Wall -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 scintilla/curses/ScintillaCurses.cxx -o ScintillaCurses.o scintilla/curses/ScintillaCurses.cxx: In constructor ‘ScintillaCurses::ScintillaCurses(void (*)(void*, int, SCNotification*, void*), void*)’: scintilla/curses/ScintillaCurses.cxx:948:34: error: ‘Curses’ is not a member of ‘Scintilla::CaretStyle’ 948 | vs.caret.style = CaretStyle::Curses; // block carets | ^~~~~~ scintilla/curses/ScintillaCurses.cxx: In member function ‘void ScintillaCurses::UpdateCursor()’: scintilla/curses/ScintillaCurses.cxx:1188:57: error: ‘Curses’ is not a member of ‘Scintilla::CaretStyle’ 1188 | if (hasFocus && FlagSet(vs.caret.style, CaretStyle::Curses)) curs_set(in_view ? 1 : 0); | ^~~~~~ make[2]: *** [Makefile:171: ScintillaCurses.o] Error 1 make[2]: Leaving directory '/home/paag/Devel/ta/textadept/src' ``` Answers: username_1: It looks like *src/scintilla.patch* needs to be reapplied to *src/scintilla/*. There has been a recent change in this area. From *src/* try running `touch scintilla.patch` and then `make scintilla`. Or simply delete *src/scintilla/* and run `make scintilla` to re-download it and then patch it. username_0: -- Fragen sind nicht da um beantwortet zu werden, Fragen sind da um gestellt zu werden <NAME> username_1: It looks like you are not trying to compile the latest beta, but an older version. I recommend `make clean-deps` followed by `make deps` and then `make curses`. username_1: That's annoying. It looks like the tarball has changed. You can either `dos2unix` the *scintilla/* directory or grab this archived tarball: https://github.com/username_1/textadept-build/blob/default/scintilla513.tgz. I will notify upstream. username_0: Any news from upstream? Thx, /PA username_1: I don't think Neil wants to go through the steps to re-release. username_0: OK, I adapted src/Makefile to grab the tgz from the textadept repo and was able to compile. Couldn't that be a more future-proof solution? Just my .0002 cents /PA username_1: This is the first time one of Neil's tarballs has not worked, and this was for a beta release. If this happened for a stable release, then perhaps a workaround would be a good idea. I think people building from source would be more comfortable with archives from official sources rather than my own uploads. username_0: Hi, have you thought about upgrading to 514? Most of the stuff in the patch seems to be already in upstream: ``` Applying scintilla.patch patching file gtk/ScintillaGTK.cxx Hunk #1 succeeded at 913 (offset 28 lines). patching file src/XPM.cxx Hunk #1 succeeded at 96 (offset 4 lines). patching file gtk/ScintillaGTK.cxx Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file gtk/ScintillaGTK.cxx.rej patching file include/Scintilla.h Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file include/Scintilla.h.rej patching file include/Scintilla.iface Reversed (or previously applied) patch detected! Skipping patch. 
1 out of 1 hunk ignored -- saving rejects to file include/Scintilla.iface.rej patching file include/ScintillaTypes.h Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file include/ScintillaTypes.h.rej patching file src/EditView.cxx Reversed (or previously applied) patch detected! Skipping patch. 8 out of 8 hunks ignored -- saving rejects to file src/EditView.cxx.rej patching file src/Editor.cxx Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file src/Editor.cxx.rej patching file src/PositionCache.cxx Reversed (or previously applied) patch detected! Skipping patch. 3 out of 3 hunks ignored -- saving rejects to file src/PositionCache.cxx.rej patching file src/PositionCache.h Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file src/PositionCache.h.rej patching file src/ViewStyle.cxx Reversed (or previously applied) patch detected! Skipping patch. 2 out of 2 hunks ignored -- saving rejects to file src/ViewStyle.cxx.rej patching file src/ViewStyle.h Reversed (or previously applied) patch detected! Skipping patch. 1 out of 1 hunk ignored -- saving rejects to file src/ViewStyle.h.rej ``` And it seems to have the right LF termination ;-) username_1: Yes, that is my plan. Status: Issue closed username_1: 11.3 beta 3 updated to Scintilla 5.1.4, and 11.3 is also on it. This issue can be closed.
gatsbyjs/gatsby
362994164
Title: Fetch markdown content from gatsby-source-drupal and parse it with the gatsby-transformer-remark Question: username_0: I've used a Gatsby project using `gatsby-transformer-remark` and it worked great with `md` files. Now I am in the need to use `gatsby-source-drupal` plugin. And the data fetched from Drupal is retrieved as `HTML` and `Markdown` format (I am using a WYSIWYG that output Markdown from Drupal) This is the current used query to fetch data where `value` as Markdown format and `processed` the HTML. ``` query($slug: String!) { nodePage(fields: { slug: { eq: $slug } }) { title path { alias } body { value processed } fields { slug } } } ``` There is a config, action, or event I can use to to transform the Markdown content to HTML via the `gatsby-transformer-remark` plugin to take advantage of among others image and link preprocessing, Thanks Answers: username_1: Hey @username_0, there is no config for that (yet - we will be working on making stuff like that simplier in future), but you can make it work - it requires some post processing, so `remark` plugin would handle those: `gatsby-node.js` ```js exports.onCreateNode = ({ node, actions }) => { if (node.internal.type === 'NodePage') { const { createNode, createNodeId, createNodeField } = actions // take markdown content const content = node.body.value // create node with body content const textNode = { id: createNodeId(`${node.id}MarkdownBody`), parent: node.id, children: [], internal: { type: _.camelCase(`${node.internal.type} MarkdownBody`), // mediaType will allow remark plugin to transform plain // text into markdown node mediaType: `text/markdown`, content, contentDigest: digest(content), }, } createNode(textNode) // add link createNodeField({ node, name: 'markdownBody___NODE', value: textNode.id }) } } ``` And then you should be able to query `fields.markdownBody.childMarkdownRemark`: ```gql query($slug: String!) { nodePage(fields: { slug: { eq: $slug } }) { title path { alias } body { value processed } fields { slug markdownBody { childMarkdownRemark { html } } } } } ``` username_0: Yay! Awesome @username_1 this works great thanks Status: Issue closed
jupyter/jupyter_core
64868807
Title: Downloading as .py fails Question: username_0: Not sure if this is the right location, but I upgraded ipython notebook to version 3.0 using anaconda and therefore am using jupyter. ``` ipython notebook --version 3.0.0 ``` I am on a mac running OS X version 10.10.2. When I try using the toolbar to download the notebook as a python file, or html or rst or pdf, I get a page saying ``` 500: Internal Server Error the error was Could not import nbconvert: No module named mistune ``` Thanks for the help in advance Answers: username_1: Try `pip install mistune`. Bugs should still be filed at https://github.com/ipython/ipython for now. We're in the process of splitting up the repo, so there will be new repositories like jupyter_notebook and jupyter_nbconvert for now. Status: Issue closed
cu-mkp/manuscript-object
793777517
Title: delete previous derivatives when writing derivatives Question: username_0: During a not dry run (a wet run?) make sure to delete the old derivatives before writing them, to avoid duplicates if we change the naming scheme (or, ahem, [accidentally use the wrong naming scheme](https://github.com/cu-mkp/manuscript-object/pull/69)). Answers: username_0: [`shutil.rmtree()`](https://docs.python.org/3/library/shutil.html#shutil.rmtree) is the best function for this. Status: Issue closed
certbot/certbot
402215465
Title: now we are getting error for 2.7 going to deprecation please update this Question: username_0: pip prints the following errors: ===================================================== DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. Exception: Traceback (most recent call last): File "/opt/eff.org/certbot/venv/local/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 176, in main status = self.run(options, args) File "/opt/eff.org/certbot/venv/local/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 346, in run session=session, autobuilding=True File "/opt/eff.org/certbot/venv/local/lib/python2.7/site-packages/pip/_internal/wheel.py", line 848, in build assert building_is_possible AssertionError Answers: username_1: I am going to close this issue related to pip 19.0: * in favor or #3726 to handle the deprecation warning * in favor of #6682 for the AssertionError Status: Issue closed
cubewise-code/arc-issues
334367917
Title: TI Launched from ARC not writting to cube (Architect is ok, ARC in Debug mode is OK) Question: username_0: Hi, I just found a weird issue. I wrote a TI in arc. When I run it, it seems running but the value is not written to the cube. When I run in debug the value is pushed to the cube --> all good. When I run it from perspective the value is pushed to the cube --> all good. I tried the 3 steps above in different order and I consistently have me the same result. I tried to close chrome and re-open it, still the same issue. I restarted ARC service, the issue is still the same. Detail of the process Source: ODBC Plain query (no Parameter passed dynamically) ARC version: v0.9.7 Let me know if you need additional information. Answers: username_0: Hi, I will try to find something repeatable and provide some instructions. username_0: Where Test is a valid element and Test3 is not an element of the dimension. In ARC when executing the process, there is a popup message with ProcessHasMinorErrors but the valid CellPutS is not comitted to the cube. In Architect, there is an error message, the valid CellPutS is comitted to the cube. In ARC when using the debug modes, there is no popup message showing the error, the valid CellPutS is comitted to the cube. ARC 0.9.7 PA 11.2.00000.27 I can provide the CFG if needed. username_1: Hi @username_0 , I don't think this is caused by Arc as it just executes the TI via the REST API. My guess is that the REST API might have a different transaction mechanism. Can you try executing the TI via Postman (https://code.cubewise.com/blog/mastering-the-tm1-rest-api-with-postman) and see if you encounter the same issue? username_2: I have raised this as an issue with IBM support. Seems a pretty significant breaking change if TI processes with minor errors don't commit data changes. username_2: FYI I tested this today using PA 2.0.5 | TM1 11.3.00000.27 It seems to be fixed in the latest version. Although the failure to commit a transaction in the event of minor error isn't explicitly listed as a fixed issue in the 2.0.5 fix list the notes do mention a change to Execute.Process and ExecuteProcess methods and the transaction model and error log status. So it seems with these improvements they also fixed this bug. Status: Issue closed username_1: Hi @username_0 & @username_2, There is a new REST Action called ExecuteWithReturn, Arc will use this in TM1 11.3 IF1 or later.
lilohuang/PyTurboJPEG
948641575
Title: encoding a grayscale image Question: username_0: Hi and thank you for your work on this great package. I'm trying to save a simple 2D `numpy` array into a grayscale image, using this test code: ```python import numpy as np from turbojpeg import TJPF_GRAY, TurboJPEG jpeg = TurboJPEG() rng = np.random.default_rng() image = rng.integers(low=0, high=255, dtype=np.uint8, size=(3008, 4112)) with open("test.jpg", "wb") as output_jpeg_file: output_jpeg_file.write(jpeg.encode(img_array=image, pixel_format=TJPF_GRAY)) ``` and what I get is the following error (obviously, because the input array has only 2 dimensions): ``` height, width, _ = img_array.shape ValueError: not enough values to unpack (expected 3, got 2) ``` I would expect this code to work, since if I pass `pixel_format=TJPF_GRAY` a 2D array should suffice to give all the necessary data to the encoder. Even if I try to augment the dimensions yielding three identical channels (using `np.dstack`), I get a `OSError: Unsupported color conversion request` error on the `encode` command. What am I doing wrong? What can be done to obtain a grayscale JPEG? Status: Issue closed Answers: username_1: Hi @username_0 You need to create the numpy array dimension like below for gray scale image, and set the jpeg subsample to be TJSAMP_GRAY. I will close this ticket and please let me know if it still doesn't work for you. ```python import numpy as np from turbojpeg import TJPF_GRAY, TJSAMP_GRAY, TurboJPEG jpeg = TurboJPEG() rng = np.random.default_rng() image = rng.integers(low=0, high=255, dtype=np.uint8, size=(3008, 4112, 1)) with open("test.jpg", "wb") as output_jpeg_file: output_jpeg_file.write(jpeg.encode(img_array=image, pixel_format=TJPF_GRAY, jpeg_subsample=TJSAMP_GRAY)) ``` Best, Lilo username_0: Thanks a lot @username_1 it works perfectly!
pointlander/compress
183240981
Title: please version this repository Question: username_0: Hello, Could you please tag this repository? I am the Debian Maintainer for this project and tags would help Debian keep up with new releases/bugfixes. See: - http://semver.org/ - http://dave.cheney.net/2016/06/24/gophers-please-tag-your-releases Answers: username_1: done Status: Issue closed username_0: Thanks!
WayofTime/BloodMagic
344480410
Title: Sentient Bow disables Quark's Torch arrows Question: username_0: #### Issue Description: Note: If this bug occurs in a modpack, please report this to the modpack author. Otherwise, delete this line and add your description here. If this is a feature request, this template does not apply to you. Just delete everything. #### What happens: When firing one of Quark's torch arrows from a Sentient bow, the arrow simply sticks in the wall like normal arrows, not even afire. #### What you expected to happen: If a torch arrow strikes a full block, it turns into a torch, otherwise it sticks and displays a burning animation. #### Steps to reproduce: 1. Set up with Quark and Blood Magic 2. Make a Sentient bow and some torch arrows, fire at a wall. 3. ... ____ #### Affected Versions (Do *not* use "latest"): - BloodMagic: 1.12.2-2.2.12-97 - Minecraft: 1.12.2 - Forge: 2707 Answers: username_1: Since TehNut and Arcaratus told me before that BloodMagic doesn't do mod integration, Vazkii will probably need to integrate Sentient Bows into Quark. *If there's any contrary statement from a collaborator, I'll gladly tend to it*
yoanlcq/vek
288424412
Title: It's possible to build good-looking circles with 8 quadratic Bézier curves Question: username_0: Cubic Bézier curves provide `unit_quarter_circle()` and `unit_circle()`. Quadratic Bézier curves should provide `unit_eighth_circle()` and `unit_circle()` as well. Answers: username_0: See also [#7](https://github.com/username_0/vek/issues/7)
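For reference, a hedged sketch of the construction the title refers to, assuming a unit circle centred at the origin: a single quadratic Bézier can approximate a 45° arc by placing its control point at the intersection of the two end tangents,

$$P_0 = (1,\,0), \qquad P_1 = \bigl(1,\, \tan\tfrac{\pi}{8}\bigr), \qquad P_2 = \bigl(\cos\tfrac{\pi}{4},\, \sin\tfrac{\pi}{4}\bigr),$$

so that $B(t) = (1-t)^2 P_0 + 2t(1-t)\,P_1 + t^2 P_2$ starts and ends on the circle with matching tangent directions; eight copies of this segment, each rotated by a multiple of $\pi/4$, close the full circle. The fit is not exact: the midpoint of each segment lies slightly outside the circle, by roughly 0.3% of the radius.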
conventional-changelog/standard-version
668035524
Title: substitutions don't work when repo is hosted in a subdirectory Question: username_0: **Describe the bug** If the repository url is, for example, https://sigyl.com/git/jsonnet/compose.git then the substitutions don't work. **Current behavior** with the above url you get host = https://sigyl.com owner [blank] repository [blank] **Expected behavior** host = https://sigyl.com/git owner = jsonnet repository = compose **Environment** - `standard-version` version(s): 8.0.2 - Node/npm version: v12.13.1 - OS: ubuntu **Possible Solution** **Additional context** Status: Issue closed Answers: username_0: fixed with "compareUrlFormat": "{{repoUrl}}/compare/{{previousTag}}...{{currentTag}}"
gradle/gradle
990828030
Title: Ignore empty directories on Java sourcepath compile option Question: username_0: The `sourcepath` compile option is missing the `@IgnoreEmptyDirectories` annotation to ignore empty directories. This improves cacheability when empty folders are included in a dirty local environment. ### Expected Behavior Empty directories in the `sourcepath` inputs must be ignored for cache key computation. ### Current Behavior Empty directories in the `sourcepath` inputs are taken into account for cache key computation. ### Context We get build cache misses on compile tasks when empty directories are existent in the defined `sourcepath`. ### Steps to Reproduce See the linked pull request which has an integration test. Status: Issue closed Answers: username_1: Fixed via #18223.
NativeScript/NativeScript
294355714
Title: Error instantiating class NEHotspotConfigurationManager from iOS SDK Question: username_0: However, I do see NEHotspotConfigurationManager in the tns-platform-declarations typings (objc!NetworkExtentions.d.ts to be precise) and as far as I know, NS should support zero-day features in the underlying SDK. ### Which platform(s) does your issue occur on? iOS ### Please provide the following version numbers that your issue occurs with: ``` $ tns info All NativeScript components versions information ┌──────────────────┬─────────────────┬────────────────┬─────────────┐ │ Component │ Current version │ Latest version │ Information │ │ nativescript │ 3.4.2 │ 3.4.2 │ Up to date │ │ tns-core-modules │ 3.4.0 │ 3.4.0 │ Up to date │ │ tns-android │ 3.4.1 │ 3.4.1 │ Up to date │ │ tns-ios │ 3.4.1 │ 3.4.1 │ Up to date │ └──────────────────┴─────────────────┴────────────────┴─────────────┘ ``` ### Is there code involved? If so, please share the minimal amount of code needed to recreate the problem. ``` constructor() { /* NEHotspotConfigurationManager class requires Hotspot Configuration capability in Xcode --> copy entitlements file to app/App_Resources/iOS/<appname>.entitlements */ let wifiManager = new NEHotspotConfigurationManager(); <--- does not work let test = new NEAppProxyFlow(); <--- random other class from same typing file does work } ``` When I do the instantiation outside of the constructor, the error message is not displayed but NS stops execution of that method. Answers: username_1: Hello @username_0, You are not being able to use the **NEHotspotConfigurationManager** class because NEHotspotConfiguration only works on device. username_0: @username_1 I forgot to mention: this is on a real device... username_1: @username_0 what is the device's iOS version? Status: Issue closed username_0: Turns out that I am an idiot... The phone was still running iOS10 :flushed: I made sure to get the latest Xcode and that the iOS simulator was on iOS11, but when I found out that wifi does not work on the simulator, I took an old iPhone that hadn’t been upgraded and I forgot to check the iOS version. I was very tired…. sorry for troubling you and thanks for your help!
typora/typora-issues
700604179
Title: Inconsistent display font for math Question: username_0: Symbols like \oint \oiint \oiiint are not consistent with \int \iint \iiint Here a simple example: ![mathDisplay](https://user-images.githubusercontent.com/71203480/93023417-74293f00-f5b4-11ea-8c96-cc444941b855.png) Instead of that, I would expect to see something like this (for the others as well): ![normalDisplay](https://user-images.githubusercontent.com/71203480/93023508-1ba67180-f5b5-11ea-9be1-fb359a7bbdd1.png) This problem appears to be attached to typora itself because I tried on a Linux machine and the same happens. I don't know if this has a solution but I am happy to hear a workaround. This is important for consistency. Answers: username_1: In my test \oiint \oiiint is not supported by mathjax https://github.com/mathjax/MathJax/issues/566 they seems to be supported by extensions -> https://github.com/mathjax/MathJax/pull/1810 Status: Issue closed username_1: put into #3278 username_2: typora's mathjax version is too old, so it doesn't support `\oint`
philc/vimium
782487828
Title: (sort of bug report: )when using 'f' whatsapp conversations do not appear as hyperlinks Question: username_0: When trying to open a link in whatsapp web (using 'f') to change conversations, they don't appear and I'm not able to navigate by using 'just the keyboard'. Answers: username_1: Please send the screenshot of the issue . username_2: I'm having the same issue; looks like the only clickable areas recognised by Vimium are the search bar and the three buttons at the top: ![image](https://user-images.githubusercontent.com/10012625/110333336-64e1f600-8019-11eb-9ea4-2a7a819bfe44.png) Had a quick look at the source code, and it appears that each conversation is a `div` with an `onMouseDown` event that calls `onClick`. (On Firefox 68.0, with Vimium 1.66.) username_3: Faced the same issue.. if this can be fixed we can save a lot of time while using whatsapp username_4: Um, I have a customized version of Vimium, named [Vimium C](https://github.com/username_4/vimium-c), and its `LinkHints` support an option "`clickable`", which is a string of CSS selector, like `.btn1,.btn2`. For example, a mapping rule may look like: ``` map f LinkHints.activateMode clickable=".user-name,.msg-title" ```
Atlantiss/NetherwingBugtracker
459787648
Title: The Exorcism of Colonel Jules / Digging for Prayer Beads Question: username_0: [//]: # (Enclose links to things related to the bug using http://wowhead.com or any other TBC database.) [//]: # (You can use screenshot ingame to visual the issue.) [//]: # (Write your tickets according to the format:) [//]: # ([Quest][Azuremyst Isle] Red Snapper - Very Tasty!) [//]: # ([NPC] Magistrix Erona) [//]: # ([Spell][Mage] Fireball) [//]: # ([Npc][Drop] Ghostclaw Lynx) [//]: # ([Web] Armory doesnt work) [//]: # (To find server revision type `.server info` command in the game chat.) **Description**: Not sure if this is a bug (couldn't confirm), but the text that is stated in the quest "The Exorcism of Colonel Jules" makes it seem like "Digging for Prayer Beads" is a pre-requisite: "**Take the prayer beads you found** and speak with Anchorite Barada " https://wowwiki.fandom.com/wiki/Quest:The_Exorcism_of_Colonel_Jules https://wowwiki.fandom.com/wiki/Quest:Digging_for_Prayer_Beads **Current behaviour**: On NW, you can do the quest "The Exorcism of Colonel Jules" without first doing "Digging for Prayer Beads" (note my questlog which has "Digging for Prayer Beads" not yet complete: ![WoWScrnShot_062419_045753](https://user-images.githubusercontent.com/51173000/60007288-048e0380-9640-11e9-8c24-f38bbfd43974.jpg) **Expected behaviour**: I couldnt confirm if Digging for Prayer Beads is a pre-requisite for The Exorcism of Colonel Jules, but going off the text "**Take the prayer beads you found** and speak with Anchorite Barada " it seems like it should be. **Server Revision**: 3034 Answers: username_1: Digging for Prayer Beads suppose to be pre-quest of The Exorcism of Colonel Jules https://www.wowhead.com/quest=10916/digging-for-prayer-beads https://www.wowhead.com/quest=10935/the-exorcism-of-colonel-jules https://i.imgur.com/0QrlnMA.png username_2: Fixed in rev 3470. Status: Issue closed
JuliaLang/julia
735295315
Title: Not all Factorizations have Adjoint (namely QR variants) Question: username_0: Encountered during https://github.com/JuliaDiff/ChainRules.jl/pull/302; the simpler form is that I am calling, for example:
```
Af = factorize(A)
C1 = B1\Af
C2 = B2\Af'
```
which I thought should be a faster way to do
```
C1 = B1\A
C2 = B2\A'
```
but it errors depending on the value of `A`. E.g. it works if `A` is positive definite, since then it gets `Cholesky`, which has `Adjoint` defined on it. It works for square matrices in general as `LU` has `Adjoint`. But neither `QRPivoted` nor `QRCompactWY` support `adjoint`. I believe fixing this would also require adding a method for `ldiv!` on that adjoint. Answers: username_1: Ref https://arxiv.org/pdf/1710.08717.pdf username_0: I mean adjoint on the QR object (i.e. turn it sideways), **not** the abuse of terminology that is the adjoint of the function (i.e. find a linear operator that has the same rate of change, then find the linear operator that has as its matrix form the adjoint of the first's matrix form) username_2: related: #35421 username_3: Fixed by #40899 Status: Issue closed
nkolot/SPIN
1047671472
Title: A bug in 'mpi_inf_3dhp.py' Question: username_0: Hello, It seems that a sorting is needed in: https://github.com/nkolot/SPIN/blob/5c796852ca7ca7373e104e8489aa5864323fbf84/datasets/preprocess/mpi_inf_3dhp.py#L93 . I think we are sampling 1 out 10 frames in sequential manner, don't we? Thanks, Shiyang
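To make the suggestion concrete, here is a hedged sketch (the variable names are made up, not the ones in the script): sorting the globbed frame list before taking every 10th element keeps the sampling truly sequential.

```python
import glob
import os

seq_dir = "path/to/mpi_inf_3dhp/S1/Seq1"  # hypothetical location of extracted frames

# glob() returns files in arbitrary order, so sort first; only then is
# taking every 10th entry a sequential 1-in-10 sampling of the sequence
frame_files = sorted(glob.glob(os.path.join(seq_dir, "*.jpg")))
sampled = frame_files[::10]
print(len(frame_files), len(sampled))
```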
dddzg/esbc-drunkcat-fans
337389172
Title: Starting the Kafka consumer and producer Question: username_0:
./kafka-console-producer.sh --broker-list 1156.136.57:9092 --topic final
./kafka-console-consumer.sh --topic final --from-beginning --zookeeper 172.16.17.32:2181,172.16.17.32:2181,172.16.31.10:2181
splunk/splunk-ansible
640370661
Title: When specifying SPLUNK_TAIL_FILE is not reflected in the message at the end of entrypoint.sh Question: username_0: Related to https://github.com/splunk/docker-splunk/pull/163 ` echo Ansible playbook complete, will begin streaming var/log/splunk/splunkd_stderr.log ` should probably be something like this to reflect that we're tailing a different file: ``` if [ -z "$SPLUNK_TAIL_FILE" ]; then echo Ansible playbook complete, will begin streaming var/log/splunk/splunkd_stderr.log else echo Ansible playbook complete, will begin streaming ${SPLUNK_TAIL_FILE} fi ``` (not tested, just freehand code) Answers: username_1: Good call, I can certainly add that Status: Issue closed
scikit-learn-contrib/imbalanced-learn
258521045
Title: Feature Importance for imblearn Methods Question: username_0: How can I extract the important features of an imblearn model such as a BalancedBaggingClassifier? Status: Issue closed Answers: username_0: How can I extract the important features of an imblearn model such as a BalancedBaggingClassifier? username_1: you can check the features selected by each tree that is trained: http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html Status: Issue closed username_2: AttributeError: 'BalancedBaggingClassifier' object has no attribute 'feature_importances_' username_1: BaggingClassifier does not implement feature importances even in scikit-learn. You need to use the BalancedRandomForest to get this working
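For later readers, a hedged sketch of both suggestions; the exact layout of `estimators_` can differ between imbalanced-learn versions, so the unwrapping step below is defensive rather than canonical.

```python
import numpy as np
from sklearn.datasets import make_classification
from imblearn.ensemble import BalancedBaggingClassifier, BalancedRandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, weights=[0.9, 0.1], random_state=0)

# Option 1: average the importances reported by the trees inside the bagging ensemble
bbc = BalancedBaggingClassifier(n_estimators=50, random_state=0).fit(X, y)
trees = [
    # depending on the version, each fitted estimator is either the tree itself
    # or a small pipeline wrapping (sampler, tree); take the last step if so
    est.steps[-1][1] if hasattr(est, "steps") else est
    for est in bbc.estimators_
]
bagging_importances = np.mean([t.feature_importances_ for t in trees], axis=0)

# Option 2: BalancedRandomForestClassifier exposes feature_importances_ directly
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(bagging_importances)
print(brf.feature_importances_)
```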
headwirecom/peregrine-cms
713711085
Title: Prevent delete operation if published Question: username_0: The goal of this ticket is to ensure that Peregrine does not allow users to delete a resource if it's in a published state. This should be enforced in the admin UI as well as the API. This change should help mitigate state issues related to replication. **Use Case** TODO: @username_1 - please consider the impact and changes needed for delete|move|copy|rename|... Answers: username_1: Looking at how the structures will be affected - _move_/_rename_ will be painful to implement :( Both of these operations should probably not be blocked for the author by the replication status. That's my gut feeling. _Copy_ only requires us to skip the replication properties. username_2: @username_1 @username_0 @cmrockwell it feels to me like pages probably should not be deletable/movable once published, until they are unpublished again? username_0: @cmrockwell I agree. Anything that puts the published cache or publish/live server in a stale/invalid state should be prevented. We'd like to handle the move/rename in a separate ticket as a fast follower. Status: Issue closed username_0: This issue has been resolved. A new issue will be logged to provide similar support for move and copy operations.
stephens999/dscr
85195325
Title: vignette does not illustrate reset_dsc Question: username_0: need to have some discussion of features like this somewhere. perhaps in vignette. Answers: username_1: Agreed - one way to do this might be to make a "mistake" with one of the `one_sample_location` methods and then blow it away. We can mention in the text how different subsets of scenarios and methods can be reset, or how the entire dsc can be blown away as well. Do you think this will remain digestible, or should we just do a separate, even more basic kind of `dscr` vignette to illustrate the reset verbs?
HaxeSummit2017/organization
247117428
Title: Kickoff panel for Haxe Summit 2018 Question: username_0: Experience taught us that you can't begin to plan the conference too soon. It actually makes a lot of sense to start planning the next event at the end of the current one, because that's when the audience, the speakers, the sponsors and organizers are in one single place. We can get feedback from all sides and in an ideal scenario we might already be able to round up candidate cities for the next event and get some initial commitments from those present who want to be involved in putting it together. I'd say 1h will give us ample time. Answers: username_1: 👍
ddxygq/ddxygq.github.io
625743519
Title: Common Spring Problems | Ke Guang's Blog Question: username_0: http://www.ikeguang.com/2019/12/17/spring-issue/ The `No identifier specified for entity ...` error is usually caused by the POJO missing the @Id annotation, e.g.:
```java
@AllArgsConstructor
@NoArgsConstructor
@Data
@ToString
@Entity
public class User implements Serializable {
    private st
```
pixelant/pxa_product_manager
474431908
Title: Assignment of subcategories (multilanguage) Question: username_0: My customer has a large product range with categories and subcategories. On the list page in page tree, we chose your Plugin with "List view (Possible navigation)". In the default language everything works fine: the page shows me all available subcategories first, then I can click into the subcategorie to show the related products. Plus I can see the subcategories in backend, tab "Sub-Categories" in the super category. When we translate the products, the assignment of subcategories does not seem to work. Neither in frontend nor in backend tab "Sub-Categories", the sub categories are available (thus assigned as parent category of the subcategory dataset itself). Answers: username_0: To visualize, please see the attached screenshots ![subcategories-en](https://user-images.githubusercontent.com/9283637/63169060-20a59400-c036-11e9-9f94-387fa1788c14.PNG) ![subcategories-de](https://user-images.githubusercontent.com/9283637/63169061-20a59400-c036-11e9-98d7-2c66fc0465a6.PNG) Status: Issue closed username_1: Categories are not relevant anymore.
ProjectEvergreen/greenwood
938146573
Title: extend development server to watch and reload on custom extensions Question: username_0: ## Type of Change - [ ] New Feature Request - [ ] Documentation / Website - [x] Improvement / Suggestion - [ ] Bug - [ ] Other (please clarify below) ## Summary Observed in #650 , right now our live reload server only watches for a [fixed set of files](https://github.com/ProjectEvergreen/greenwood/blob/v0.13.0/packages/cli/src/plugins/server/plugin-livereload.js#L10). ```js this.liveReloadServer = livereload.createServer({ exts: ['html', 'css', 'js', 'md'], applyCSSLive: false // https://github.com/napcs/node-livereload/issues/33#issuecomment-693707006 }); ``` ## Details It would be nice to open this up a couple ways 1. Directly via the `devServer` config, something like `watchExtensions` or something? 2. By adding extensions from custom resource plugins, like #647 (and default, should it just watch everything?)<issue_closed> Status: Issue closed
fedora-java/javapackages
181071772
Title: AttributeError: 'ConnectionResetError' object has no attribute 'message' Question: username_0: [Reproducible on Travis](https://travis-ci.org/fedora-java/javapackages/builds/164013447). Happens only with Python 3. ``` Traceback (most recent call last): File "/home/kojan/git/javapackages/python/javapackages/common/mock.py", line 61, in install_artifact status = fo.readline() File "/usr/lib64/python3.5/socket.py", line 575, in readinto return self._sock.recv_into(b) ConnectionResetError: [Errno 104] Connection reset by peer During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/kojan/git/javapackages/test/../java-utils/request-artifact.py", line 47, in <module> install_maven_artifact(sys.argv[2]) File "/home/kojan/git/javapackages/python/javapackages/common/mock.py", line 73, in install_maven_artifact install_artifact(artifact.get_rpm_str(compat_ver=artifact.version)) File "/home/kojan/git/javapackages/python/javapackages/common/mock.py", line 66, in install_artifact raise ConnectionException(e.message) AttributeError: 'ConnectionResetError' object has no attribute 'message' ```<issue_closed> Status: Issue closed
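A sketch of the Python-3-compatible pattern the traceback points at; the class and function names below mirror the traceback, not the exact javapackages sources. Exception objects lost the `.message` attribute in Python 3, so the handler has to build the message portably.

```python
class ConnectionException(Exception):
    pass

def read_status(fo):
    try:
        return fo.readline()
    except OSError as e:  # ConnectionResetError is a subclass of OSError
        # e.message only exists on Python 2 exception objects;
        # str(e) works on both Python 2 and Python 3
        raise ConnectionException(str(e))
```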
rustychris/ugrid_visit
286950389
Title: Compilation issue with VisIt 2.13.0, OSX Question: username_0: Related to missing references to FileFunctions::ReadAndProcessDirectory. Answers: username_0: More generally, on High Sierra, using either stock gcc/g++ (i.e. clang) or with homebrew gcc-4.9, cmake dependencies are broken, resulting in failed dependencies referencing miller86. This is presumably related to [this bug report](https://visitbugs.ornl.gov/issues/2611). username_0: cmake dependencies on libz.dylib can be solved by the script fix_depends.sh, after editing the paths therein. This is just a short script to basically follow the suggestion in the above linked bug report. username_0: ReadAndProcessDirectory shows up in several libraries: - libengine_par.dylib - libengine_ser.dylib - libgui.dylib - libviewercore_par.dylib - libviewercore_ser.dylib - libvisitcommon.dylib Of those, only visitcommon is currently in the link command (visible with make VERBOSE=1) visitcommon has that symbol as $ nm -gU libvisitcommon.dylib | grep ReadAnd 00000000000c3970 T __ZN13FileFunctions23ReadAndProcessDirectoryERKSsPFvPvS1_bblES2_b which demangles to FileFunctions::ReadAndProcessDirectory(std::string const&, void (*)(void*, std::string const&, bool, bool, long), void*, bool) Compare that to the undefined symbol: __ZN13FileFunctions23ReadAndProcessDirectoryERKNSt3__112basic_stringIcNS0_11char_traitsIcEENS0_9allocatorIcEEEEPFvPvS8_bblES9_b which demangles to FileFunctions::ReadAndProcessDirectory(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, void (*)(void*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool, long), void*, bool) So either there is a bum header which has allowed avtUGRIDFileFormat.C to reference that symbol, or there is a compiler mismatch. On my system, 2.10.2 PluginVSInstall.cmake gives compiler version as: # i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) While the system (High Sierra) /usr/bin/clang reports: Apple LLVM version 9.0.0 (clang-900.0.38) Target: x86_64-apple-darwin17.0.0 Adding in more explicit casts to the call of ReadAndProcessDirectory does not help. This appears to go back to compiler versions. My Xcode toolchain includes this typedef: typedef basic_string<char, char_traits<char>, allocator<char> > string; Which becomes part of the mangled object name, as opposed to the old code which just names std::string. username_0: At this point, the work-arounds are: - wait until the VisIt folks release a version compiled with a more recent clang, or - compile VisIt from source. username_0: Adding -DCMAKE_CXX_FLAGS="-stdlib=libstdc++" to the cmake invocation yields the same error as when using homebrew g++: Undefined symbols for architecture x86_64: "vtkDataSetAlgorithm::SetInputData(vtkDataSet*)", referenced from: MeshInfo::ZoneToNode2D(vtkDataArray*, vtkUnstructuredGrid*) in avtUGRIDFileFormat.C.o "vtkDataSetAlgorithm::GetOutput()", referenced from: MeshInfo::ZoneToNode2D(vtkDataArray*, vtkUnstructuredGrid*) in avtUGRIDFileFormat.C.o But that appears to be define with a compatible mangled name in `libvtkCommonExecutionModel` It's possible the GNU compile could be fixed by adding a dependency on libvtkCommonExecutionModel. username_0: MacOS High Sierra, VisIt 2.10.2 This is a work-around for the compiler version issues, by using libstdc++ from gcc. Invoke cmake as cmake -DCMAKE_CXX_FLAGS="-stdlib=libstdc++" . 
That causes clang to choose the g++ standard C++ library rather than LLVM's. I am not sure whether this is related or a separate bug, but this was also necessary to get it to compile: in ugrid.xml, add vtkCommonExecutionModel-6.1 as an extra library. This fixes a missing symbol related to vtkDataSetAlgorithm::SetInputData. <LIBS> ${NETCDF_LIB} vtkCommonExecutionModel-6.1 </LIBS>
jmc2obj/j-mc-2-obj
609268372
Title: Dried Kelp filled with water Question: username_0: I'm not sure if this is the intended behavior or not, but dried kelp is filled with water. This produces some flickering and other problems when rendering since two surfaces are in the exact same place. In the config file the dried kelp looks just like any other block just like it should be. I cut off one side of a 2x2x2 cube here that illustrates what i mean: ![image](https://user-images.githubusercontent.com/64552495/80632022-ec082a00-8a56-11ea-939c-5260d74677b2.png) Answers: username_0: The same issue seems to happen with the dead coral blocks too. username_1: This is happening because of some fixed rules put in place when they introduced the 'waterlogged' state. The problem is Minecraft isn't storing the waterlogged state for these blocks so there needs to be a more robust way of handling this. https://github.com/jmc2obj/j-mc-2-obj/blob/78db8562141f41df6ad2260cde31fd1c8389f398/src/org/jmc/Chunk.java#L292-L294 Status: Issue closed
Vagabottos/Oath.Vision
855482811
Title: Mis shown card Question: username_0: 030200000210Empire and Exile00000101234510C84CED0F0D03EC0609A6E70219BA1C13AE6FEB0313E4FF2EFFFFFF1DFFFFFF30D6BF15612B31578B51BEB1D40634A22F8443BC83A585D51DD3D25D2644017B0F9B77429990AA8135B23B108D532C3880066A93BD4052330FDFE6E5E1DEEAE2DCDADBDDE8E9E3E0 Shows secret police but should be forced labor
stellar/stellar-protocol
271135791
Title: Proposal: BIP32 (Hierarchical Deterministic Wallet) Support Question: username_0: ## Preamble ``` SEP: Not assigned Title: Stellar BIP32 (Hierarchical Deterministic Wallet) Support Author: username_0 Status: Draft Created: 2017-11-03 ``` ## Abstract Defines a convention for using BIP32 paths to generate Stellar accounts. This allows users of wallets that follow BIP39 (Mnemonic code for generating deterministic keys) to use their same wallet for managing their Stellar accounts. This is especially relevant to hardware wallets such as the Ledger Nano and Trezor. ## Specification BIP44 (https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) defines a BIP32 path for multiple coin types: ```text m / purpose' / coin_type' / account' / change / address_index ``` This specification follows BIP44 to build the initial path: ```text m/44'/ ``` The `coin_type` is `148'` as defined by Satoshi Labs in SLIP-0044 (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) ```text m/44'/148'/ ``` This specification now diverges from BIP44 since the remaining fields are not applicable to Stellar. Instead, the fields are defined as follows: ```text m / 44' / 148' / primary_asset' / account' ``` #### Primary Asset This field defaults to `0'` for XLM but can be set to one of the following values for users that wish to store assets in separate addresses. Using a different value for this field is optional and exists to help users organize their accounts. Storing all assets in the default account should be supported by all applications. BIP32 Index | Asset | Description -----|------|------------ [Truncated] ```text m/44'/148'/0'/0' ``` The key at this path would determine their Stellar public address. For most users, this would be all they need and multiple Assets could be stored in this account. **User with multiple Assets** A user desiring to separate their assets into multiple accounts may use a path like: ```text m/44'/148/1'/0' ``` This would generate the first account for asset `1'` (see Primary Asset section). Answers: username_1: I think it's a good idea to have this discussion now and try to come to a consensus. Indeed, it encouraged me to take a closer look at how to properly use the bip32 paths following bip44. A couple of things. While your scheme reuses the bip44 purpose field (44') it does not actually follow the bip44 scheme. Bip43 states: "We encourage different schemes to apply for assigning a separate BIP number and use the same number for purpose field, so addresses won't be generated from overlapping BIP32 spaces." So I think if we want to invent our own scheme we need our own bip too. Summerizing your proposal, compared to bip44 it inserts a field for the network between the purpose field and the coin type field and removes the change and index fields. Instead of: m / purpose' / coin type' / account' / change / address_index we would have m / purpose' / network' / coin type' / account' Two observations about removing the change and address_index fields. I can't determine whether bip44 regards these fields as optional or not. However that may be, they indeed do not apply to Stellar and it would be better to not have to deal with them at all. Moreover the problem with the Ledger Nano S is that (quoting one of the Ledger devs) 'the way public key is derived from private one in EDDSA does not allow not hardened derivation of the ecdsa key'. You will notice that at both the change and address_index level public derivation is used (non-hardened). 
So it seems it is not even possible right now to follow bip44 exactly in this regard. In that case, better just ignore these fields. About the network field, we need to think about how useful this would be as coin type is already a unique identifier. The one use I can think of is that it allows a Stellar-specific app to lock on the Stellar path 44'/148' while allowing derivation for all Stellar-based assets that wish to register their own coin type without requiring a new release. I'll explain. For instance, Ledger requires you to specify one or more bip32 paths when you load an app onto the device. The app is then 'locked' for that path, meaning attempts to derive from a different path are rejected. Note that this is not a full path, but a path prefix. So for the Ledger app I use 44'/148'. But here appears another difference between Stellar and the type of network the authors of bip44 had in mind. On most other networks accounts are free so there is less incentive to use the same Ethereum address for all your Ethereum-based assets than is the case with Stellar. I'm wondering if we couldn't just assume people will want to use the same address for all their stellar-based assets. Probably the vast majority of people on the Ethereum network use the same address for all their Ethereum-based coins. And I see the Ethereum Ledger app simply locks on the ether and ether classic coin types and so assumes the same. So the question remains is, do we want to / need to apply for a bip to register our own scheme or do we re-use / abuse / re-interpret bip44 for our own purpose, meaning: 1. don't use change and address_index fields 2. use coin type 148 for all Stellar-based assets username_0: After reading the Ethereum thread above, I'm leaning towards "reinterpreting" BIP44 and sticking with `m/44'/148'/` since that seems to be existing convention and a lot easier than registering a new purpose code. username_1: Someone on that thread commented aptly that even for Ethereum separate accounts for different coins poses a problem since to use it you need native currency in your account to pay network fees. So, indeed, it's not only the activation of the account that plays a role. It seems entirely unnecessary to reserve a dedicated field in order to allow users to have separate accounts for different coins. Users have 2^32 different accounts at their disposal by using the account field as they see fit. They can use 44'/148'/0' for their lumens, 44'/148'/1' for their mobi, etc. if they want. Reserving a separate field for coin type would suggest each coin necessarily be on a separate branch, which we seem to agree is not the case at all. I love being pragmatic with these kinds of things. It is true that this proposal is not entirely compliant with BIP44 while using the BIP44 purpose. But as modern linguists agree, usage trumps formal definition and in practice we've seen 44' to mean rather something like 'here be crypto currency' instead of the technically exact definition outlined in the BIP44 proposal. For instance, Ethereum wallets have been using 44'/60'/account'/index, leaving out the change field. It's not like wallet developers that want to support different coins would benefit if we'd follow the specification exactly. Networks are so different they need custom handling anyway. I agree with you that to take the path Ethereum developers are taking now of registering their own purpose seems cumbersome and a long winded process. 
We need a formal document that defines a standard for *Stellar* so that *Stellar* wallets can be interoperable in practice. We don't need advise or approval from bitcoin devs or wider crypto-currency community to achieve that. If there is a good argument why we *should* take that route I *would* like hear it. It hasn't occurred to me yet. username_0: Good catch, I missed that. I agree with the rest of the points you made and think we should go with `m/44'/148'/ account'` as the path and get rid of the `primary_asset` field. Unless anyone else has comments, I'll update the specification and work on adding in some of the test vectors from https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki#Test_Vectors so we can verify both our implementations are generating the expected addresses. username_2: I spent some time exploring BIP-[32, 39, 44] while working on https://github.com/stellar/go/pull/81 and https://github.com/tyler-smith/go-bip32/pull/7. First of all, you call your proposal BIP32 but it's actually a mirror of BIP-44 (95% the same). Correct me if I'm wrong but I see no point of copying the full doc here. The key to BIP-44 is to use the same protocol for each coin so wallet developers don't have to search for per-coin specification. For example, BIP-44 is also not 100% applicable for Ethereum because of it's nature and all of the Ethereum wallets I've used just used main key (which is `m/44'/60'/0'/0`). For example, in Ledger Ethereum app you simply use one account for all your operations. I think we should do the same for Stellar. I see your point when it comes to using `primary_asset` segment in the derivation path but given that in Stellar each account needs to have a minimum balance of, currently, 20 XLM plus 10 XLM per each subentry I don't think users will want to use this feature, they are more likely to have all their assets in a single account. When it comes to BIP-32 for Stellar the main problem is we're using different elliptic curve: `curve25519` instead of `secp256k1` used in Bitcoin and Ethereum. During my research some time ago I found two papers dealing with this problem: * 1st from chain.com which turns out to be insecure (https://chain.com/docs/1.1/protocol/specifications/chainkd) * 2nd from University of Luxemburg that proves that chain.com's solution is vulnerable to timing attack and other problems. They proposed their own solution: https://cardanolaunch.com/assets/Ed25519_BIP.pdf I couldn't find anything better so at this point I think there is no reliable method for deterministic key derivation for `curve25519`. Maybe @username_1 has more information? But because of this, you can't use the full magic of BIP-32 and BIP-39, in other words: you can generate mnemonic code and derive private keys for master private key but you can't do the same for master public key. Actually I created a proof-of-concept recently: https://github.com/stellar/go/pull/78 but obviously master public key -> child public key derivation doesn't work because of above. Maybe @stanford-scs will have some time in a near future to research it (of course if my findings are correct). username_0: Thanks a lot for this reference, are you aware of any implementations of the key generation scheme in the University of Luxemburg paper? The main motivation for submitting this proposal was so that the two hardware wallet projects (mine and @username_1 's) ensure they generate the same key from the same input. 
A user should be able to recover their account on a Trezor if they started with a Ledger Nano and vice-versa. These devices are heavily focused on Bitcoin, so maybe we're taking the wrong approach by trying to force their existing functionality into Stellar. I think what matters from the user's perspective is that if they take the mnemonic they used on their Nano and use it on a Trezor they should get the same Stellar account out. @username_2 - Do you think it would be a better idea to try to come up with a BIP-39 style way of converting a passphrase to a Stellar private seed and not worry about hierarchical addresses since they're not important to Stellar? I think that might actually simplify the Trezor implementation and think it sounds like a good idea since no one has come up with a use case for multiple accounts (and several against). username_1: Well, I for one would certainly like to have multiple accounts at my disposal when using my Ledger, and I suspect a lot more people find that an indispensable requirement. From my point of view using BIP32 with a subset of BIP44 (`m / 44' / 148' / x'`) is simply the most pragmatic way to go about this. It is the only BIP32 path is formally reserved for Stellar in a proposal that is recognised by the larger crypto community. This gives you two advantages: 1) People will be dealing with a familiar pattern instead of coming across yet another way of doing things. 2) No threat of BIP32 path clashes with other usages. username_2: I think we should still follow BIP-44, check my motivation below. --- What do you think about this (we can write SEP including all of these): * We strictly follow BIP-39 when it comes to mnemonic code. This generates master private key. * We strictly follow BIP-44: * We generate the _main_ private key using `m/ 44'/148'/0'/0/0` path as in BIP-32. To generate a public key you need to `curve25519` private -> public key derivation (or simply one of our SDKs). This should be enough for most of the users. * If your app supports it, you can generate more private keys using `m/ 44'/148'/x'/0/y` paths. * This way we will be able to achieve all the [use cases](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki#Use_cases) described in BIP-32 doc when `master public key -> child public` works for `curve25519`. * We should make it clear that `master public key -> child public` key derivation does not work. I think this could be a good start and we can always update the doc if there are new developments connected to "BIP-32 for `curve25519`". username_2: OK, @username_1 I agree with you. We should use `m/44'/148'/x'` and `m/44'/148'/0'` as a _main_ key in Stellar. username_1: Great! It's a good starting point. I think this proposal still enables us to improve upon later without compatibility issues once public-to-public derivation becomes possible and allows support for the more advanced use cases you mentioned. username_0: I like this idea too, any thoughts on whether mnemonic to private key be a separate standard or would you like me to add it to this one? I am slightly leaning towards making it a separate standard since it's a separate BIP and not strictly necessary for how we're using BIP44. username_0: @username_2 - I've been looking into how BIP39 (mnemonic codes) would fit in with this and I think it may impact the path derivation. From what I can tell from the Ledger's docs, each "app" running on it only gets access to a BIP32 path that it has to request. 
This means for maximum compatibility we have to go with how they're deriving the master key. If they're doing it like other BIP39 generators (I've tested a javascript one, a PHP one, and looked at Trezor's implementation) then the algorithm looks something like this: ``` uint8_t hash[64]; // stores the final 64 bytes of the sha512 hash HMAC_SHA512_CTX ctx; sha512_init(&ctx, 'Bitcoin seed'); // this value is hardcoded sha512_update(&ctx, seed); // "seed" is the raw bytes generated from the mnemonic sha512_Final(&ctx, hash); ``` I think this will impact your Go PR since you'll need to include the hardcoded "Bitcoin seed" string as part of the data you're hashing to arrive at the same master key as the Ledger and Trezor. **Mnemonic test** mnemonic string, tested with https://iancoleman.io/bip39/ : `athlete sample issue bulk horn hundred two bike element scheme humble drink shy buzz advance blouse now present episode canoe pupil laugh lemon net` Using the above, I get a bip32 root key of `<KEY>` Deriving `m/44'/148'/0'` (had to modify the source code since the link above doesn't have an option for 148') then gets me a key of `<KEY>` Decoding this key and passing the raw 32 bytes to `StellarSdk.Keypair.fromRawEd25519Seed(rawBytes)` gives me this keypair: ``` <KEY> <KEY> ``` @username_1 Can you confirm that the Ledger works this way and what keypair is generated given the above mnemonic? username_1: The master key derivation from the BIP39 binary seed as described in [SLIP-0010](https://github.com/satoshilabs/slips/blob/master/slip-0010.md) involves a different seed modifier for ed25519 ('ed25519 seed') than is used secp256k1 ('Bitcoin seed'). According to [this thread on reddit](https://www.reddit.com/r/ledgerwallet/comments/71tphi/is_there_a_javascript_implementation_of_os_perso/) Ledger is following SLIP-0010 on this so should be good. username_2: @username_0 the process will be slightly different in terms of key derivation (I finished working on this yesterday: https://github.com/stellar/go/pull/150). Mnemonic -> seed will be exactly the same as in [BIP-0039](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki). username_2: Let's continue discussion here: https://github.com/stellar/stellar-protocol/pull/63 Status: Issue closed
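For readers following the thread, a small sketch of the SLIP-0010 ed25519 derivation discussed above, untested against real Ledger/Trezor output; the seed value is a made-up placeholder and error handling is omitted.

```python
import hashlib
import hmac
import struct

def slip10_ed25519_master_key(seed: bytes):
    # SLIP-0010 uses the modifier b"ed25519 seed" for this curve,
    # where BIP-32/secp256k1 uses b"Bitcoin seed"
    digest = hmac.new(b"ed25519 seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # (private key, chain code)

def slip10_ed25519_ckd_priv(key: bytes, chain_code: bytes, index: int):
    # ed25519 only allows hardened derivation, so the high bit is forced on
    index |= 0x80000000
    data = b"\x00" + key + struct.pack(">I", index)
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]

# derive m/44'/148'/0' from a (placeholder) BIP-39 binary seed
seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
key, chain_code = slip10_ed25519_master_key(seed)
for i in (44, 148, 0):
    key, chain_code = slip10_ed25519_ckd_priv(key, chain_code, i)
print(key.hex())
```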
yazeed505/Instagram
819914764
Title: Project Feedback! Question: username_0: It looks like the following features are not reflected in your GIF walkthrough:
- User sees app icon in home screen and styled launch screen.
- User can sign up to create a new account.
- The photo did not post to the server.

In order for us to count these towards your submission, please record another GIF that captures these features. Once you do, please push your updates and **submit your assignment again through the course portal within 48 hours from the posted deadline so that we can regrade it**.
drcjar/ipfjes-interview
260895594
Title: Let's have a way of removing a participant from the list of participants to call and recording the reason Question: username_0: e.g. no longer wishes to take part, died, no response after 3 attempts, other (specify) Answers: username_1: So: remove the `needs_interview` tag and add an additional data entry step (this will require a database schema update obv.) - sounds fairly sensible. If the list is too cluttered in the short term, you can archive the tag in the admin to make them no longer appear on the list...
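Purely as an illustration of "recording the reason", the schema change could look something like this Django-style sketch; the model and field names are hypothetical, not anything that exists in this repo.

```python
from django.db import models

class CallListRemoval(models.Model):
    # hypothetical model: records why a participant was taken off the call list
    REASON_CHOICES = [
        ("no_longer_wishes", "No longer wishes to take part"),
        ("died", "Died"),
        ("no_response", "No response after 3 attempts"),
        ("other", "Other (specify)"),
    ]

    participant = models.ForeignKey("Participant", on_delete=models.CASCADE)
    reason = models.CharField(max_length=32, choices=REASON_CHOICES)
    reason_other = models.CharField(max_length=255, blank=True)
    removed_on = models.DateField(auto_now_add=True)
```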
openshift/origin
241832410
Title: oauthclientauthorizations do not check the client's UID Question: username_0: We do not honor oauthclientauthorizations whose user UID does not match. However, we do not track the client's UID so an oauthclientauthorization will match a client that has been deleted and recreated. Answers: username_0: /lifecycle frozen username_0: /unassign @stlaz @sttts @mfojtik
juice49/next-og-image
894691953
Title: Add support for non AWS Lambda environments Question: username_0: We depend on the [chrome-aws-lambda](https://github.com/alixaxel/chrome-aws-lambda/wiki/HOWTO:-Local-Development) package, which does not currently support environments beyond AWS Lambda. As reported in #3, this also means local development doesn't work. We should investigate whether it's possible to support non AWS Lambda environments, and at least document the issue in the readme.<issue_closed> Status: Issue closed
yuforest/property-exem
328752994
Title: Remove the default scaffold comments Question: username_0: https://github.com/username_1/property-exem/blob/35042bb39aada7cd28a5075dae1e648eeea514fc/app/controllers/nearest_stations_controller.rb#L4 Answers: username_0: @harada4atsushi There are others as well, so please take care of them. username_0: @username_1 There are others as well, so let's fix them. Status: Issue closed username_1: https://github.com/username_1/property-exem/blob/35042bb39aada7cd28a5075dae1e648eeea514fc/app/controllers/nearest_stations_controller.rb#L4 Status: Issue closed
monero-project/monero-site
269970616
Title: Condense Kovri articles into one to declutter Moneropedia Question: username_0: Even though Kovri is going to be integrated into Monero in the future, I feel like having a lot of stub articles about Kovri topics in the Moneropedia is detrimental to it's function as instructional for new users. I'm not as familiar with I2P or Kovri, so I don't feel equipped to make the change myself. I think a lot of this information can be moved copied over to monero-project/kovri-site eventually, but for now I think we should consolidate it into one article. (Another possibility I've considered is to expand Moneropedia to include categories, and categorize these in a way that it's easier to navigate Monero specific topics seperate from Kovri topics.) I believe these are all the articles that are currently Kovri related: [Base32-Address](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/base32-address.md) [Base64-Address](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/base64-address.md) [Canonically Unique Host](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/canonically-unique-host.md) [Clearnet](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/clearnet.md) [Destination](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/destination.md) [Eeepsite](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/eepsite.md) [Floodfill](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/floodfill.md) [Garlic Encryption](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/garlic-encryption.md) [Garlic Routing](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/garlic-routing.md) [i2np](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/i2np.md) [i2p](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/i2p.md) [i2pcontrol](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/i2pcontrol.md) [java-i2p](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/java-i2p.md) [Jump Service](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/jump-service.md) [Kovri](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/kovri.md) [Lease](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/lease.md) [Network Database](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/network-database.md) [Router Info](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/router-info.md) [SSU](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/ssu.md) [Subscription](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/subscription.md) [Transports](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/transports.md) [Tunnel](https://github.com/monero-project/monero-site/blob/master/resources/moneropedia/tunnel.md) +moneropedia +enhancement Answers: username_1: Bad idea to merge into a single article. Good idea to have top-level categories. username_0: I kinda agree with you, but with Kovri having it's own site, I think we don't have to have individual articles for things that could be categorized under a particular article. 
Wikipedia does this somewhat and uses section headers for each bit of information, so it's still possible to link directly to a particular subject in the article. The articles should have anchor links to jump to each section. For instance:
- Kovri: *Top-level information on what Kovri is and does, explaining the difference between it and I2P (C++ vs Java, etc.).*
  - I2P (subsections explaining each of these things, specifically how they're implemented in Kovri)
    - Tunnel
    - Transport
    - Jump Service
    - Subscription
    - Garlic Encryption
    - Garlic Routing
    - EEP Site

username_2: I agree that many of those terms may create confusion, but I also agree that merging everything into one article is not optimal and would be dispersive. What about making a 'Kovripedia' on getkovri where we pull these terms? IMO there are enough of them, and they will increase with time; it would be needed in the future anyway. username_0: +moneropedia +enhancement @danrmiller Any idea why the bot didn't pick up the labels from the OP? username_1: I think Moneropedia could use more work and be given more exposure once that work is done. I'm agreeing with all the issues here. username_2: This issue is old and was discussed and closed on gitlab. Please reopen if you feel it's still relevant. Status: Issue closed
KATO-Hiro/Daily-hit
707317705
Title: Try "Setting up Fortran in VS Code, all the way to debug configuration [Win10]" Question: username_0: # Service name

## Service overview
+

## Motivation
+

## Keywords
+

## Difficulty
+

## Delivery method
+

## Customer acquisition method
+

## Monetization
+

## Competing / similar services
+

## Differentiation points
+

## Notes
+

https://qiita.com/emushi5/items/48a0a837fd72519c68c4
ESIPFed/esiphub-dev
335182757
Title: Data for workshop Question: username_0: @username_1 has about 20GB of data that he would like students in the CDI workshop to be able to access. He has loaded it to google drive and shared with my USGS google account. I'm currently copying it to AWS S3 using this rclone command: ``` rclone sync gdrive-usgs:imageclass aws:cdi-workshop --checksum --fast-list --transfers 16 & ``` Answers: username_1: FYI, the final data set will be in the region 30 GB username_0: The above rclone command *syncs* google drive with s3, so that will be no problem. Here are two ways to read image data from s3: https://gist.github.com/username_0/c4e7650d5f94c00d6a7b7cd67acf2ab9 Will one of these work? If not, another approach might be to try https://github.com/s3fs-fuse/s3fs-fuse username_1: The boto3 approach didn't work because it needs a credentials file. Option 2 with s3fs worked - thanks! username_0: Cool. So I'm trying to make sure we have all the data from Google Drive copied to S3. I think you have 564277 files on Google Drive: ``` $ rclone ls gdrive-usgs:imageclass | wc -l 564277 ``` but right now there are only 133022 files on S3: ``` $ rclone ls aws:cdi-workshop | wc -l 133022 ``` I think this means my rclone command is not syncing as it should be, but restarting the transfer every time I restart it. It's running now, but when it quits, I'll do a little more in-depth sleuthing to figure out what is going on. username_0: Ah, I'm getting this error: ``` [1]+ Running rclone sync gdrive-usgs:imageclass aws:cdi-workshop --checksum --fast-list --transfers 16 & (IOOS3) [rsignell@sand ~]$ 2018/06/29 16:36:26 ERROR : semseg_data/ccr/all/tile_96/road/PO2DOF.jpg: Failed to copy: failed to open source object: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded ``` off to explore why username_1: I could push from my own machine - just tell me how. That might be better in the longer term anyway in case I need to add even more files username_0: I ran the sync command again, and after dying again with the quota limit, it's at: ``` (IOOS3) [rsignell@sand ~]$ rclone size aws:cdi-workshop Total objects: 514165 Total size: 29.329 GBytes (31492012782 Bytes) ``` username_0: I fired off the rclone sync command and it finished with no errors this time. Here's the size: ``` (IOOS3) [rsignell@sand ~]$ rclone size aws:cdi-workshop Total objects: 564328 Total size: 29.459 GBytes (31631116199 Bytes) ``` Does that look complete? Did you add `564328 - 564277 = 51` new files in the last two days? username_1: Yes, 51 new files. Total size should be 33.8 GB. I've looked through and doesn't seem to be anything major missing. Is there some way I can do a diff between aws:cdi-workshop and gdrive-usgs:imageclass to see what's missing? username_0: This is what rclone is telling me is on google drive: ``` (IOOS3) [rsignell@sand ~]$ rclone size gdrive-usgs:imageclass Total objects: 564328 Total size: 29.459 GBytes (31631116199 Bytes) ``` username_1: Apologies, yes you are correct. 29.459 GB. I was counting another folder as well username_0: The pangeo.esipfed.org cluster is now running in `us-east-1` instead of `us=west-2`. Okay if I move the data to `us-east-1`? username_1: sorry for late reply. Yes! btw, I'm currently unable to start a server at http://pangeo.esipfed.org/hub/user/username_1/ username_0: @username_1 I was having some problems with the cluster/Jupyterhub, but I think they are fixed. Try again? username_1: @username_0 Looks like it's working now. 
Thanks username_0: @username_1 , I've copied the data to `us-east-1` because http://pangeo.esipfed.org is now running on `us-east-1` and this way we won't be moving data across regions (which should make things cheaper and faster). The new S3 location on `us-east-1` is: `esipfed.cdi-workshop`. The data still exists on `us-west-2` on `cdi-workshop`, but it would be best to change the notebooks to the former for the reasons above. username_1: Makes sense but I still don't know how to actually make sure I'm not pulling data from ```us-west-2```. For example, I make use of the ```s3fs``` utility like so: ``` import s3fs fs = s3fs.S3FileSystem(anon=True) with fs.open('cdi-workshop/imrecog_data/NWPU-RESISC45/test/airplane/airplane_700.jpg', 'rb') as f: image = color.rgb2gray(imread(f, 'jpg')) ``` How would I modify that? username_0: Just add `esipfed/` to the bucket name like this: ``` import s3fs fs = s3fs.S3FileSystem(anon=True) with fs.open('esipfed/cdi-workshop/imrecog_data/NWPU-RESISC45/test/airplane/airplane_700.jpg', 'rb') as f: image = color.rgb2gray(imread(f, 'jpg')) ``` username_0: I'm trying the sync command again. Some files/permissions must not have transferred correctly. username_1: Thanks! username_0: Getting closer... ``` [ec2-user@ip-172-31-29-161 ~]$ rclone size s3-west:cdi-workshop Total objects: 564328 Total size: 29.459 GBytes (31631116199 Bytes) [ec2-user@ip-172-31-29-161 ~]$ rclone size s3-east:esipfed/cdi-workshop Total objects: 441391 Total size: 26.153 GBytes (28081266362 Bytes) ``` Still running.... username_0: @username_1, should be good to go! Please try again! ``` [ec2-user@ip-172-31-29-161 ~]$ rclone size s3-west:cdi-workshop Total objects: 564328 Total size: 29.459 GBytes (31631116199 Bytes) ``` ``` [ec2-user@ip-172-31-29-161 ~]$ rclone size s3-east:esipfed/cdi-workshop Total objects: 564328 Total size: 29.459 GBytes (31631116199 Bytes) ``` username_1: Great - thanks! username_1: Looks like all the images are now there, but I'm still getting permissions problems. Looks like everything in subfolders of ```/imrecog_data/EuroSAT``` are off-limits username_0: @username_1 I set the `esipfed` bucket property to public read, so I think that should make everything in the entire bucket readable. Can you please try again? username_1: Thanks @username_0, I can now read everything Status: Issue closed
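A quick way to sanity-check the new bucket from Python, reusing the same anonymous s3fs access shown earlier in the thread:

```python
import s3fs

fs = s3fs.S3FileSystem(anon=True)

# peek at the bucket contents to confirm the sync landed in us-east-1
print(fs.ls("esipfed/cdi-workshop"))
print(fs.ls("esipfed/cdi-workshop/imrecog_data")[:5])
```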
SwissDataScienceCenter/renku-graph
466402824
Title: provide Dataset information from the graph Question: username_0: The graph query endpoint should return information about `Dataset`s on the level of a project as well as for the whole instance. Answers: username_1: This is what we should receive when we query for a projects dataset, please correct me if i'm wrong on something. datasets: [ id: string name: string description: (i get now an html inside a string and i like it but it could also be a string) authors: [ {name:"", email:"", institution:"", id:"" } ] creator: (string) ==> this is the renku user that created the dataset files: [ {name:"", size:"", (if possible) id:"", date_created:"", date_added:"", author: creator: project: ==> project it first belonged to } ] date_created: (string with date the dataset was created in general) date_added: (string with the date the dataset was added to renku) projects: { list of projects that imported the same dataset and also project that "owns" the dataset } ] username_1: I think that dataset schema is also for the general search of datasets, the difference in that case is that we want to be able to search for dataset fields, filter them, etc... username_2: Thanks for that Virginia! The only question I have is whether all the fields you mentioned is what you want to show according to a relevant UI issue? I'm asking as I'd like this story to be addressing what's really needed and don't want to implement things which might be used in the future. We can always have another story and work on that iteratively. Similarly, in regards to the filters, it'd be great if you could tell me what's needed now. Thanks username_1: At the moment I have as WIP this two prs: https://github.com/SwissDataScienceCenter/renku-ui/pull/562 https://github.com/SwissDataScienceCenter/renku-ui/pull/555 I'm trying to display dataset info from a specific project. They are reading the data from the datasets yaml files and they should be getting it from the graph. The moment I can get data from the backend I can push them. I think it would be usefull that @username_3 check if the fields i think we should be getting are what we really need. This would be the first thing we need at the moment. username_2: Right, it would be great if you @username_3 could verify that. In terms of the filters, do you think that's something we need from that issue or it would be better to create another one? username_1: I think filters should be implemented on a separate issue. The filters are for the dataset search. For this we need to make a new tab where we list all the datasets in renku (like we do with the projects in this tab ==> https://renkulab.io/projects) username_2: Good, there's an issue for the datasets search #117 so we should be ok. I'll try to add more details to it. username_1: Would it be hard to implement to choose wether we get a list with the full content of the datasets or part of it (like only id and name). I think at the moment the projects don't have so many datasets but it could happen that if the projects have lots of datasets we will have to implement some lazy loading mechanism. username_2: Actually, we don't have to do anything as if you'd pass me a query like this: ``` { "query": "{ dataSets(projectPath: \"namespace/project\") { name } }" } ``` you'll get: ``` { "data": { "dataSets": [ { "name" : "x" }, { "name" : "xr" }, { "name" : "chOorWhraw" } ] } } ``` So you decide what fields you want the resource to return to you back by listing them in the body of the query. username_1: Perfect! 
username_3: We will need all the dataset metadata fields available in the UI. We want to add the ability to edit metadata soon, and that is a prerequisite for that. We also need information about the project where a dataset was created and where it is imported, since that is the next view we want to implement. I would say that we will need these in the initial version: ``` datasets: [ id: string name: string description: string/html authors: [{ name:"", email:"", institution:"", id:"" }] creator: { # this is the renku user that created the dataset userid: id, username: string } files: [{ name:"", size:"", (if possible) id:"", date_created:"", date_added:"", }] date_created: (string with date the dataset was created in general) date_added: (string with the date the dataset was added to renku) owner_project: project (the project where this dataset is defined) using_projects: [ # list of projects that import this dataset ] ] ``` And these can wait: ``` datasets: [ files: [{ author: creator: project: ==> project it first belonged to projects: ==> list of projects using the file }] ] ``` username_2: Many thanks @username_3 for all the details. In regards to the `creator.userid` and `creator.username` would it be fine for you if we use user's email and name, I mean `user@host` and `first-name second-name` for these fields? username_0: We should stay as close as possible to the definitions from schema.org username_0: In fact, you shouldn't need to change what you get back from the sparql query apart from stripping the prefix. Ideally we would be returning valid json-ld but that's a separate discussion. username_2: Agreed, `json-ld` is probably a separate discussion. My question was more about if UI is happy to use email as `userid` and `first-name second-name` as `username`. username_0: The UI probably wants gitlab user ID - and this would require us to include gitlab-specific Metadata in the KG, which we haven't done up to this point. Or the graphql backend should make a request to gitlab, but the users endpoint requires an admin token iirc. username_2: I can obtain an Access Token for a project from the `token-repository` service and try to use it for fetching GitLab id. username_3: I think the gitlab user ID would be nice to have, since it would make it easy to provide more information about the user in the UI, but I also understand the desire to keep the schema valid. Let us start with the information that is used to identify the user in the schema, and return to the gitlab user later. username_2: After pretty long hours spent on both assessing the data available in the Knowledge Graph and possibilities giving by REST and GraphQL, I ended up with two approaches. 1. 
There are two REST endpoints on the knowledge-graph service returning JSON in HATEOAS format (more on that below): * `GET /knowledge-graph/projects/:project-path/data-sets` response example: ``` [ { "identifier":"9f94add6-6d68-4cf4-91d9-4ba9e6b7dc4c", "name":"rmDaYfpehl", "_links":[ { "rel":"details", "href":"http://t:5511/data-sets/9f94add6-6d68-4cf4-91d9-4ba9e6b7dc4c" } ] }, { "identifier":"a1b1cb86-c664-4250-a1e3-578a8a22dcbb", "name":"a", "_links":[ { "rel":"details", "href":"http://t:5511/data-sets/a1b1cb86-c664-4250-a1e3-578a8a22dcbb" } ] } ] ``` * `GET /knowledge-graph/data-sets/:id` response example: ``` { "_links" : [ { "rel" : "self", "href" : "https://zemdgsw:9540/data-sets/6f622603-2129-4058-ad29-3ff927481461" } ], "identifier" : "6f622603-2129-4058-ad29-3ff927481461", "name" : "data-set name", "description" : "vbnqyyjmbiBQpubavGpxlconuqj", // optional property "created" : { "dateCreated" : "1970-05-12T06:06:41.448Z", "agent" : { "email" : "<EMAIL>", "name" : "<NAME>" } }, "published" : { "datePublished" : "2012-10-14T03:02:25.639Z", // optional property "creator" : [ { "name" : "e wmtnxmcguz" }, { "name" : "<NAME>", "email" : "<EMAIL>" // optional property } ] [Truncated] "dateCreated": "2001-09-05T10:38:29.457Z" }, { "name": "file2" "atLocation"": "data/chOorWhraw-name/file2", "dateCreated": "2001-09-05T10:48:29.457Z" }], "isPartOf": [{ "name": "namespace/project1" }, { "name": "namespace/project2" }] } ] } } ``` **NOTES/ASSUMPTIONS** * I imagine you can find some of the property names a bit not very precise or misleading. This is because we tried to keep the `schema.org` and `prov` schemas so we don't have to do mental mapping and we can start serving `json-ld` responses at some point. * We cannot serve some of the properties mentioned in the comments as we simply don't have the data in our KG. We can think what's essential and I can try to enrich the responses by querying other sources like GitLab. I'll do that anyway for things like users and projects anyway so UI and other clients have a consistent, single API. username_3: Small, but important comment -- please use "dataset(s)" as the term for this entity. I myself prefer the spelling with a space, but we have standardized around dataset (one word) in Renku, so we should stick to that. More substantial comments to come. username_0: What I like about the HATEOAS approach is that it can leverage the fact that we have a linked-data graph. You wouldn't even need to return e.g. info about the creator - you could just return ids. The other thing that is nice about the REST response above is that all that is needed to turn it into a json-ld object is the inclusion of the `@context` at the top. Then any of our clients who are designed to work with json-ld can easily use this data as well. Just one thing I noticed - you have: ``` "created" : { "dateCreated" : "1970-05-12T06:06:41.448Z", "agent" : { "email" : "n@ulQdsXl", "name" : "<NAME>" } }, "published" : { "datePublished" : "2012-10-14T03:02:25.639Z", // optional property "creator" : [ { "name" : "<NAME>" }, { "name" : "<NAME>", "email" : "<EMAIL>" // optional property } ] }, ``` The `agent` under `dateCreated` should be under a list of `creator` - that's how we handle/represent creator(s) elsewhere (e.g. like you have under `published`) username_3: Sorry for the delay. I wrote up a response, but I must have forgotten to push the comment button. 
I agree with all of your points in favor of the REST approach, but my concern, and the reason GraphQL is appealing, is that it allows for more efficient communication between the UI and backend. This becomes more important as the amount of metadata per entity increases. On the UI, we will show an overview page for datasets, were we just want a small amount of information (name, author, description, creation date). Consider a project with 5 datasets. for this project, we will need to make 6 calls to the server (one to get the list of datasets and 5 to get the other information), discarding most of the information that is returned. With the REST-only solution, how do we plan on optimizing the responses and client/server communication? username_2: I see your point. What we can maybe do is to go for a kind of middle ground and agree what's the set of properties you would need when querying for the list of project's data-sets? In other words, I'd return you all name, author name, description and creation date for each data-set along with the links. So you get a complete set of data for a single page and then you follow the links when you need. I'm aware this solution would make us more coupled but as long as you would need just a reasonable small subset of the properties, we still should be better in the terms of performance and maybe in the future some other users of the API could take advantage of that. The whole KG API is I think a bit of an experiment at this point anyway, as I guess we're not really sure what kind of data we'll need (at least I don't know). I'm just getting worried after the data-set endpoint, that the data model we'd be returning from the GraphQL endpoints will either grow too big or we'll end up with many resources which the clients would have to know about while with the HATEOAS they could simply follow just by reading the responses. username_3: I think we can start with the REST endpoint, since it will make sense to have that anyway. We can revisit GraphQL in the future. username_2: Yes, of course. I'll leave the GraphQL resources for the time being anyway. In terms of the `GET /knowledge-graph/projects/:project-path/datasets` endpoint. Are you fine with having just the `name`, `identifier` and link to the details for each dataset for now? username_3: Let us do it that way. It means that we will have to fetch datasets and all of the details for the first page of datasets, which is not optimal for the UI, but I think it is the best tradeoff because it does not make sense for the `datasets` endpoint to provide all the information we actually need. Status: Issue closed
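To make the HATEOAS flow concrete, a client-side sketch; the base URL is a placeholder, and the `data-sets` path segment follows the draft above even though, per the naming comment, it may end up spelled `datasets`.

```python
import requests

BASE = "https://example.org/knowledge-graph"  # placeholder deployment URL

# list a project's datasets, then follow the HATEOAS "details" link of the first one
listing = requests.get(f"{BASE}/projects/namespace/project/data-sets").json()
details_url = next(
    link["href"] for link in listing[0]["_links"] if link["rel"] == "details"
)
details = requests.get(details_url).json()
print(details["name"], details.get("description"))
```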
UCSF-Costello-Lab/LG3_Pipeline
365656111
Title: HIGH PRIORITY: Allow for updating the genome (FASTA) reference
Question:
username_0: # Background
The Ziv lab needs to switch the genome reference file. The first place this comes into the pipeline is the BWA alignment step.
# Task(s)
Make it possible to change the genome reference FASTA file for the alignment step. After that, look at the remaining steps and see what other reference files need to be updated as well.
Answers:
username_1: What genome do they need?
username_0: 
```
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
```
(the filename does _not_ reflect GRCh37 but the file content does)
username_1: I am glad it is not GRCh38... :)
The easiest way is to rename the chromosomes in the reference fasta file and rebuild the bwa indexes. Otherwise we will have to rename chromosomes in all annotations.
username_0: Ok, thanks. We decided to stick with the current hg19 (the unknowns in this pipeline are too many and the payoff might be zero) - the rationale for using GRCh37 is not really there.
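For reference, the rename-and-reindex route mentioned by username_1 would look roughly like the sketch below. File names and the rename rules are assumptions (unplaced and decoy contigs are named differently in b37 and hg19 and would need extra care), and in the end the lab stayed on the existing hg19 reference anyway.

```sh
# Rough sketch only: convert b37-style headers (">1", ">MT") to hg19-style
# (">chr1", ">chrM"), then rebuild the index used by the BWA alignment step.
sed -e 's/^>MT/>chrM/' -e 's/^>\([0-9XY]\)/>chr\1/' grch37_reference.fasta > hg19_style_reference.fasta
samtools faidx hg19_style_reference.fasta
bwa index -a bwtsw hg19_style_reference.fasta
```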
apple/cups
141447170
Title: Add http write buffer for improved performance
Question:
username_0: Version: 1.2-feature
CUPS.org User: jlovell
Adding a small buffer to http writes provides a dramatic performance increase. Before applying the attached patch, testspeed reports numbers like:
testspeed(2801): 100 requests in 1.3s (0.013s/r, 74.1r/s)
testspeed: 5x100=500 requests in 1.0s (0.002s/r, 500.0r/s)
After the patch I'm seeing 2 to 3 times the throughput:
testspeed(3519): 100 requests in 0.4s (0.004s/r, 230.7r/s)
testspeed: 5x100=500 requests in 1.0s (0.002s/r, 500.0r/s)
The important part of this is to flush the buffer in all the right places; please check to make sure I got them. In particular I didn't try any TLS upgrades.
I'd be curious to know if the results on other systems match Mac OS X.
Comments? Thanks!
Answers:
username_1: Hi Michael, would it be possible to get the http_buffered_write.patch file for download? I've got an old 1.1.23 installation, updating of this system is a PITA, but I would like to apply this patch.
Thx, Johannes
username_0: I've fixed the formatting of the patch, but I'm not sure it will apply cleanly to 1.1.23.
username_1: Great, thx for your support! A few line modifications and it worked!
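For readers without access to the attached patch, the core idea is small. A minimal, illustrative sketch follows (names, the 2 KB size and the structure are assumptions, not the actual CUPS code -- the real patch lives inside the http layer and also has to flush before reads, before TLS upgrades, and at the end of each request):

```c
/*
 * Illustrative sketch only.  Small writes are coalesced into a fixed buffer;
 * the buffer must be flushed before any read and when the request is done.
 */
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define WRITE_BUFLEN 2048		/* assumed size, not the patch's value */

typedef struct				/* stand-in for the fields added to http_t */
{
  int		fd;			/* underlying socket */
  size_t	wused;			/* bytes currently buffered */
  char		wbuffer[WRITE_BUFLEN];	/* pending output */
} buffered_http_t;

static int				/* O - 0 on success, -1 on error */
flush_write(buffered_http_t *http)	/* I - connection */
{
  size_t	offset = 0;		/* bytes written so far */

  while (offset < http->wused)
  {
    ssize_t bytes = write(http->fd, http->wbuffer + offset, http->wused - offset);

    if (bytes < 0)
      return (-1);

    offset += (size_t)bytes;
  }

  http->wused = 0;
  return (0);
}

static ssize_t				/* O - bytes accepted, -1 on error */
buffered_write(buffered_http_t *http,	/* I - connection */
               const char      *data,	/* I - data to send */
               size_t          length)	/* I - number of bytes */
{
  if (http->wused + length > sizeof(http->wbuffer) && flush_write(http) < 0)
    return (-1);

  if (length >= sizeof(http->wbuffer))
    return (write(http->fd, data, length));	/* big writes bypass the buffer */

  memcpy(http->wbuffer + http->wused, data, length);
  http->wused += length;
  return ((ssize_t)length);
}
```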
medikoo/es5-ext
303150986
Title: Introduce (is|ensure)Thenable, update isPromise
Question:
username_0: `isThenable` should detect objects that expose a `then` method.
`isPromise` should be updated to detect _thenables_ whose constructor creates new promises via the _constructor revealing_ pattern. Still, the check should not try to create a promise; instead, just checking for the existence of `constructor.resolve` and `constructor.reject` should be fine.
Answers:
username_0: Those utils are now served elegantly by the [type](https://github.com/username_0/type#type) package, with the small difference that `promise/is` confirms only on a native promise or its extension.
Status: Issue closed
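A minimal sketch of the checks described in the original proposal (illustrative only -- not the final implementation that ended up in the `type` package):

```js
// Rough sketch; ES5 style to match the library.
var isThenable = function (value) {
	return value != null && typeof value.then === "function";
};

var isPromise = function (value) {
	// A thenable whose constructor can create new promises, detected without
	// actually constructing one.
	return (
		isThenable(value) &&
		typeof value.constructor === "function" &&
		typeof value.constructor.resolve === "function" &&
		typeof value.constructor.reject === "function"
	);
};
```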
zaproxy/zaproxy
1076017532
Title: Improve error handling when failing to launch browser from Manual Explore Question: username_0: **Is your feature request related to a problem? Please describe.** If the user doesn't have the selected browser installed at all, or installed in a way ZAP can't find it, the browser doesn't launch, but NO error indication is provided in the ZAP UI. **Describe the solution you'd like** When ZAP can't launch the selected browser, provide a dialog box that explains there was an error, with some details. This feature might have been useful to the user who reported issue #6905, for example, to help them self diagnose and fix the problem, rather than simply opening a bug report. **Additional context** As an example, when I just ran ZAP from the command line and tried to launch FireFox, I got the following error at the command line, but NOTHING in the ZAP UI. I'm hopeful that this error message can help me fix my problem. However, if I had launched ZAP as an installed app, I don't see any command line output, so this error would have been 'silent' to me, and thus not helpful. 178356 [ZAP-BrowserLauncher] ERROR org.zaproxy.zap.extension.quickstart.launch.ExtensionQuickStartLaunch - **Cannot find firefox binary in PATH**. Make sure firefox is installed. OS appears to be: WIN10 Build info: version: 'unknown', revision: 'unknown', time: 'unknown' System info: host: 'DYLPMR2', ip: '192.168.56.1', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_251' Driver info: driver.version: FirefoxDriver org.openqa.selenium.WebDriverException: Cannot find firefox binary in PATH. Make sure firefox is installed. OS appears to be: WIN10 Build info: version: 'unknown', revision: 'unknown', time: 'unknown' System info: host: 'DYLPMR2', ip: '192.168.56.1', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_251' Driver info: driver.version: FirefoxDriver at org.openqa.selenium.firefox.FirefoxBinary.<init>(FirefoxBinary.java:100) ~[?:?] at java.util.Optional.orElseGet(Optional.java:267) ~[?:1.8.0_251] at org.openqa.selenium.firefox.FirefoxOptions.getBinary(FirefoxOptions.java:216) ~[?:?] at org.openqa.selenium.firefox.FirefoxDriver.toExecutor(FirefoxDriver.java:187) ~[?:?] at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:147) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getWebDriverImpl(ExtensionSelenium.java:1009) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getWebDriver(ExtensionSelenium.java:868) ~[?:?] at org.zaproxy.zap.extension.selenium.internal.BuiltInSingleWebDriverProvider.getWebDriver(BuiltInSingleWebDriverProvider.java:63) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getWebDriverImpl(ExtensionSelenium.java:754) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getWebDriver(ExtensionSelenium.java:561) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getProxiedBrowser(ExtensionSelenium.java:710) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getProxiedBrowserByName(ExtensionSelenium.java:651) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getProxiedBrowserByName(ExtensionSelenium.java:627) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getProxiedBrowserByName(ExtensionSelenium.java:611) ~[?:?] at org.zaproxy.zap.extension.selenium.ExtensionSelenium.getProxiedBrowserByName(ExtensionSelenium.java:601) ~[?:?] 
at org.zaproxy.zap.extension.quickstart.launch.ExtensionQuickStartLaunch.lambda$launchBrowser$1(ExtensionQuickStartLaunch.java:209) ~[?:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_251]<issue_closed> Status: Issue closed
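A rough sketch of the requested behaviour: catch the Selenium failure where the browser is launched and surface it in a dialog instead of only logging it. The wrapper class and helper names below are assumptions for illustration, not the actual ZAP code:

```java
// Illustrative sketch only -- the real fix would live in the Quick Start
// launcher shown in the stack trace above.
import javax.swing.JOptionPane;
import javax.swing.SwingUtilities;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;

public class BrowserLaunchSketch {

    /** Stand-in for the ExtensionSelenium call that actually starts the browser. */
    public interface BrowserProvider {
        WebDriver launch(String browserName, String url);
    }

    public static void launchBrowser(BrowserProvider provider, String browserName, String url) {
        new Thread(
                        () -> {
                            try {
                                provider.launch(browserName, url);
                            } catch (WebDriverException e) {
                                // Keep the log entry, but also tell the user what went wrong.
                                System.err.println("Failed to launch " + browserName + ": " + e.getMessage());
                                SwingUtilities.invokeLater(
                                        () ->
                                                JOptionPane.showMessageDialog(
                                                        null,
                                                        "ZAP was unable to launch " + browserName
                                                                + ".\n\n" + e.getMessage(),
                                                        "Browser Launch Failed",
                                                        JOptionPane.ERROR_MESSAGE));
                            }
                        },
                        "ZAP-BrowserLauncher")
                .start();
    }
}
```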
oborel/obo-relations
172489763
Title: NR: removes input for Question: username_0: counterpart for provides input for. Document TBA Answers: username_1: CC @ukemi username_2: @username_0 Following up on this ticket after the docathon. We discussed again the need for 'removes input for' relations wrt a model involving serotonin catabolism: http://noctua.berkeleybop.org/editor/graph/gomodel:59dc728000000287 Can you please add the following new relations to RO? causally upstream of (RO:0002411) -transitively removes input for (RO:new) --directly removes input for (RO:new) Possible definitions: directly removes input for: p1 directly removes input for p2 iff there exists some c such that p1 has_input c that, in turn, prevents the realization of p2 by altering the identity, or localization, of c The current definition of 'transitively provides input for' is: transitive form of directly_provides_input_for so, by analogy, the definition of 'transitively removes input for' would be: transitive form of directly_removes_input_for Do the transitive relations need to be defined more specifically? Also note that there is a comment associated with 'transitively provides input for': 'This is a grouping relation that should probably not be used in annotation. Consider instead the child relation 'directly provides input for' (which may later be relabeled simply to 'provides input for').' What are your current thoughts on relabeling the 'directly provides/removes input for' relations? Thx. username_3: @username_2 is this still needed? username_2: @username_3 - I'm not sure if we still need/want this relation for GO-CAMs. Since this ticket is fairly old, I think we could go ahead and close it and reopen it later if needed. Status: Issue closed username_3: Thanks, closing!
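For the record, had the relations been added, the RO entries might have looked roughly like this in OBO format (no IDs were ever assigned, so the `RO:new` placeholders below are just that, and the definitions are taken from the proposal above):

```
[Typedef]
id: RO:new ! placeholder; no ID was assigned
name: transitively removes input for
def: "Transitive form of directly_removes_input_for." []
is_a: RO:0002411 ! causally upstream of
is_transitive: true

[Typedef]
id: RO:new ! placeholder; no ID was assigned
name: directly removes input for
def: "p1 directly removes input for p2 iff there exists some c such that p1 has_input c that, in turn, prevents the realization of p2 by altering the identity, or localization, of c." []
is_a: RO:new ! transitively removes input for
```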
Funwayguy/BetterQuesting
429747718
Title: [1.7.10] crash on startup with RfExpansion
Question:
username_0: forge: 10.13.4.1614
BetterQuesting: BetterQuesting-3.0.297
StandardExpansion: StandardExpansion-3.0.155
RfExpansion: RFExpansion-deobf-3.0.29
With BetterQuesting and StandardExpansion it seems to work, but if I add the RF Expansion jar it crashes before the game loads.
crash report: https://pastebin.com/6HYMLQhV
btw ty for the 1.7.10 update
Answers:
username_1: That's the deobfuscated dev version of RF Expansion. It won't work in a normal instance. Please download the correct build and try again.
Status: Issue closed
username_0: ops, mb, sry. By the way, I think the RF submission station has problems:
- simple RF charge quest: with the detect button it steals some RF (8k RF for every click, a bit annoying) from the cell in my inventory
- if I try with the station, it lets me check the quest, then it locks but no charge happens; the internal RF storage of the station is at 0 even with a power cable connected
screens:
http://puu.sh/DaqSC/2b3781e832.jpg
http://puu.sh/DaqUk/c8a8569eb8.png
Am I missing something?
haskell/bytestring
42699891
Title: Lazy variants of isInfixOf and breakSubstring
Question:
username_0: `isSuffixOf` is available in `Data.ByteString.Lazy`, but not in `Data.ByteString.Lazy.Char8` (it's commented out). Likewise, `isInfixOf` is not available in `Data.ByteString.Lazy`.
Similarly, there's no lazy version of `breakSubstring`.
Is there any reason for this?
Answers:
username_1: The relevant part of `stringsearch`: http://hackage.haskell.org/package/stringsearch-0.3.6.6/docs/Data-ByteString-Lazy-Search.html
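Until such variants exist in `bytestring` itself, they can be approximated with the `stringsearch` package linked above. A small sketch (signatures should be double-checked against the linked docs):

```haskell
module LazySearchSketch where

import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Search as Search  -- from stringsearch

-- Lazy isInfixOf: the pattern is strict, the searched string stays lazy.
lazyIsInfixOf :: S.ByteString -> L.ByteString -> Bool
lazyIsInfixOf pat = not . null . Search.indices pat

-- Lazy breakSubstring analogue.
lazyBreakSubstring :: S.ByteString -> L.ByteString -> (L.ByteString, L.ByteString)
lazyBreakSubstring = Search.breakOn
```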
OpenSlides/openslides-backend
870756563
Title: Add test for meeting.delete
Question:
username_0: Add a test with a full meeting with groups, motions, workflows, etc. Assert that all models are deleted afterwards. Explicitly test that structured fields for this meeting are deleted as well. We probably need a method that iterates over all structured fields of all users and deletes them.
Answers:
username_1: Note: meeting-scoped users may also be deleted (or shouldn't they be?). @emanuelschuetze What's your opinion here?
username_0: Will implement without deleting these users for now. Can still be added later.
Status: Issue closed
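A rough sketch of what the requested test could look like. The base class, helper methods and fixture data follow the style of the existing action tests but are assumptions, not actual project code:

```python
from tests.system.action.base import BaseActionTestCase  # import path is an assumption


class MeetingDeleteFullMeetingTest(BaseActionTestCase):
    def test_delete_full_meeting(self) -> None:
        # A meeting populated with a group, a motion, a workflow and a user
        # carrying structured fields for meeting 1.
        self.set_models(
            {
                "meeting/1": {
                    "group_ids": [11],
                    "motion_ids": [21],
                    "motion_workflow_ids": [31],
                },
                "group/11": {"meeting_id": 1},
                "motion/21": {"meeting_id": 1},
                "motion_workflow/31": {"meeting_id": 1},
                "user/5": {"group_$_ids": ["1"], "group_$1_ids": [11]},
            }
        )

        response = self.request("meeting.delete", {"id": 1})
        self.assert_status_code(response, 200)

        # Every model scoped to the meeting must be gone ...
        for fqid in ("meeting/1", "group/11", "motion/21", "motion_workflow/31"):
            self.assert_model_deleted(fqid)

        # ... including the structured fields on users referencing meeting 1.
        user = self.get_model("user/5")
        assert user.get("group_$1_ids") is None
        assert "1" not in (user.get("group_$_ids") or [])
```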