repo_name (stringlengths: 4–136)
issue_id (stringlengths: 5–10)
text (stringlengths: 37–4.84M)
USFS-PNW/Fia-Biosum-Manager
1068746565
Title: TOOLS: Revisit display/calculations for user-edited inputs Question: username_0: Revisit user-entered inputs (or a user-selected row from a loaded table such as a cut list, tree table, or sample tree table). Diameter is hard-coded when it should not be. @sorgtyler discovered that, in addition to this, some of the inputs for the calculation are sourced from the selected table row values while other parts source the user-entered data. This needs to be resolved. I suspect it would make more sense to have the option of COPYING inputs from the row to the text boxes on the lower part of the form (though obviously not all inputs get copied there); then there is no ambiguity about what the calculate volume and biomass button does—it operates on the text boxes displayed there (which the user can edit), whether that is the default tree record, something the user has entered, or something copied from the matrix above. For inputs that are not shown in these textboxes, it goes with hard-coded defaults or with whatever was on the row (if the row was copied there). It would, however, be better (in an ideal world) if all input fields had a text box so that they can be seen, even if many of them will not be modified (or have any effect for PNW users). See email 'Agenda for November 23 BioSum DEV call' for original contents and sample code from @sorgtyler. This request depends on the implementation of #262.
ds300/patch-package
791979868
Title: npm ERR! ERESOLVE unable to resolve dependency tree Question: username_0: can't patch-package command line npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: undefined@undefined npm ERR! Found: [email protected] npm ERR! node_modules/react npm ERR! peer react@"*" from [email protected] npm ERR! node_modules/react-native-gifted-chat npm ERR! react-native-gifted-chat@"https://registry.npmjs.org/react-native-gifted-chat/-/react-native-gifted-chat-0.16.3.tgz" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer react@"16.13.1" from [email protected] npm ERR! node_modules/react-native npm ERR! peer react-native@"*" from [email protected] npm ERR! node_modules/react-native-gifted-chat npm ERR! react-native-gifted-chat@"https://registry.npmjs.org/react-native-gifted-chat/-/react-native-gifted-chat-0.16.3.tgz" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See C:\Users\Noi\AppData\Local\npm-cache\eresolve-report.txt for a full report. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Noi\AppData\Local\npm-cache\_logs\2021-01-22T12_37_59_103Z-debug.log { status: 1, signal: null, output: [ null, <Buffer >, <Buffer 6e 70 6d 20 45 52 52 21 20 63 6f 64 65 20 45 52 45 53 4f 4c 56 45 0a 6e 70 6d 20 45 52 52 21 20 45 52 45 53 4f 4c 56 45 20 75 6e 61 62 6c 65 20 74 6f ... 1260 more bytes> ], pid: 19628, stdout: <Buffer >, stderr: <Buffer 6e 70 6d 20 45 52 52 21 20 63 6f 64 65 20 45 52 45 53 4f 4c 56 45 0a 6e 70 6d 20 45 52 52 21 20 45 52 45 53 4f 4c 56 45 20 75 6e 61 62 6c 65 20 74 6f ... 1260 more bytes>, error: null } C:\Work\beeclean\beeclean-user\node_modules\patch-package\dist\makePatch.js:183 throw e; ^ { status: 1, signal: null, output: [ null, Buffer(0) [Uint8Array] [], Buffer(1310) [Uint8Array] [ 110, 112, 109, 32, 69, 82, 82, 33, 32, 99, 111, 100, 101, 32, 69, 82, 69, 83, 79, 76, 86, 69, 10, 110, 112, 109, 32, 69, 82, 82, 33, 32, 69, 82, 69, 83, 79, 76, 86, 69, 32, 117, 110, 97, 98, 108, 101, 32, 116, 111, 32, 114, 101, 115, 111, 108, 118, 101, 32, 100, 101, 112, 101, 110, 100, 101, 110, 99, 121, 32, 116, 114, 101, 101, 10, 110, 112, 109, 32, 69, 82, 82, 33, 32, [Truncated] 18 timing config:load Completed in 5ms 19 verbose npm-session 8782ff4f57f2b18a 20 timing npm:load Completed in 10ms 21 timing command:exec Completed in 1772ms 22 verbose stack Error: command failed 22 verbose stack at ChildProcess.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\@npmcli\promise-spawn\index.js:64:27) 22 verbose stack at ChildProcess.emit (node:events:379:20) 22 verbose stack at maybeClose (node:internal/child_process:1065:16) 22 verbose stack at Process.ChildProcess._handle.onexit (node:internal/child_process:296:5) 23 verbose pkgid [email protected] 24 verbose cwd C:\Work\beeclean\beeclean-user 25 verbose Windows_NT 10.0.19042 26 verbose argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "exec" "--" "patch-package" "react-native-gifted-chat" 27 verbose node v15.6.0 28 verbose npm v7.4.0 29 error code 1 30 error path C:\Work\beeclean\beeclean-user 31 error command failed 32 error command C:\Windows\system32\cmd.exe /d /s /c patch-package react-native-gifted-chat 33 verbose exit 1 Answers: username_1: same problem, any solutions? 
username_1: ok, temporary solution was to go to node_modules/patch-package/dist/makePatch.js, go down to line 90 and add in "--force" in the spawnSafeSync like so: ``` try { // try first without ignoring scripts in case they are required // this works in 99.99% of cases spawnSafe_1.spawnSafeSync(`npm`, ["i", "--force"], { cwd: tmpRepoNpmRoot, logStdErrOnError: false, }); } ```
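For what it's worth, npm's error output above also names `--legacy-peer-deps` as an escape hatch; an untested variant of the same edit using that flag instead of `--force` (same spot in makePatch.js, which opts out of npm 7's strict peer-dependency resolution rather than forcing a potentially broken one through):
```js
// inside patch-package's makePatch.js, same spawnSafeSync call as above
spawnSafe_1.spawnSafeSync(`npm`, ["i", "--legacy-peer-deps"], {
    cwd: tmpRepoNpmRoot,
    logStdErrOnError: false,
});
```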
postmanlabs/postman-app-support
772113900
Title: Enable setting options to display Unicode or \n \r Question: username_0: <!-- Please read through the [guidelines](https://github.com/postmanlabs/postman-app-support#guidelines-for-reporting-issues) before creating a new issue. --> **Is your feature request related to a problem? Please describe.** Sometimes, when quickly editing an environment, we might press Enter to validate and get a carriage return inserted, which is hard to pin down. Even though this is nice, it's only displayed while editing the env, and not in the quick preview. The carriage return symbol below: ![image](https://user-images.githubusercontent.com/47771757/102774896-debaf980-43b1-11eb-96e4-7fb2c305cb82.png) **Describe the solution you'd like** Provide a setting/option such that values display \r or \n, to clearly indicate whitespace (especially when it's at the end of an environment variable name) Answers: username_1: I've updated the title to "Whitespace identifiers missing in Environment Quick Preview" to clarify the issue. 😄
saltstack/salt
72291888
Title: Caller not available in reactors Question: username_0: The Caller function is not available in reactors. This is odd, since reactors by default must run on the master. Why use LocalClient when you can directly use Caller? Add Caller functionality to reactor. Answers: username_0: See PR #23245 username_1: @username_0, thanks for working on this. I'm not an expert on Caller vs LocalClient or the reasons LocalClient was chosen for reactor work and not Caller. You may want to talk to @cachedout or @username_2 about it. username_2: I imagine this wasn't added in the past since "caller" runs salt minion modules on the minion you execute on-- in this case the master. If you add it it would be okay, but it will have a few odd side-effects-- mostly that module loading/reloading within the master's reactor is... odd ;) username_3: Looks like this functionality was added somewhat recently, so I am going to close this. If more work needs to be done, please let us know and we can re-open, or feel free to open a new issue. Thanks! Status: Issue closed
rivantsov/vita
285213702
Title: Could be basis of a GraphQL server? Question: username_0: Hi, I found this project while researching GraphQL. I needed a parser and I've used Irony. I didn't know about VITA until today. VITA looks like it could be the basis of a GraphQL server. I've frankly not found any good GraphQL servers for SQL Server written in .NET. Just an idea I wanted to share and get feedback on. Answers: username_1: Well, you just read through my plans. It's nice to find out that somebody else is thinking in the same direction! Yes, GraphQL is definitely in the plans, for sure. After building a few RESTful APIs, getting tired of it (repeating the same thing over and over), and seeing all the shortcomings, I agree with all the reasoning that GraphQL folks present in favor of the new approach. I think GraphQL is the answer, and the path to the future. Something like a VITA-based GraphQL server is in the plans for the next year. And parsing will be involved, so Irony comes back into play - I will have to refresh it, put out an official .NET Core version, fix some obvious deficiencies/troubles, and then build a GQL parser with VITA as the engine. But first - moving VITA to .NET Core; actively working on it, planning to push early next year. After that - GraphQL it is! Happy New Year! username_0: Hi Roman, That's excellent to hear! I've done much work in the past on generic proxies to RDBMS schemas, so perhaps I can assist. But even before that I might assist with Irony - since I'll be migrating a project that uses Irony to .NET Core 2.X very shortly. I did find this fork [https://github.com/daxnet/irony] which seems to indicate that not much needed to be changed for a minimal port. But not all of Irony was ported. username_1: finally started looking at GraphQL, and thinking about implementing it with VITA. Here is the initial project, for now just grammar/parser: https://github.com/username_1/ngraphql Status: Issue closed username_2: please implement this feature username_1: I will, definitely, working on it. But please understand - it's a huge challenge
qooxdoo/qooxdoo-compiler
322852520
Title: Add qx create option to avoid all cursor control Question: username_0: At present, the wizard for `qx create` attempts to overwrite each question on the screen with the next question, i.e., there are a lot of cursor control commands being written to the screen. It would be nice to have an option that eliminated all of the cursor control output so that the wizard would work nicely on a dumb terminal (or in an emacs shell). With this option enabled, each question would appear on a line, the user would respond and press Enter, which would move the cursor to the next line where the subsequent question would be presented. Answers: username_1: Looks like [inquirer](https://www.npmjs.com/package/inquirer), the npm library that we use for user interaction, doesn't provide any option for nicely degrading on dumb terminals. Need to do some research on alternatives that have better dumb terminal support, maybe one of the following? - https://www.npmjs.com/package/promptly - https://www.npmjs.com/package/prompts - https://www.npmjs.com/package/readline-sync - https://www.npmjs.com/package/prompt-sync - https://www.npmjs.com/package/enquirer username_2: Surely it would be easy to just write an `ask()` method that decides whether to use cursor based or readline? username_1: Yes, but we have multiple-choice questions that would need to be manually translated into simple prompt, which is a chore I'd rather avoid :-) username_3: how about just picking the default if the thing is called without cursor control ... I guess the answers can be supplied on the commandline too username_1: @username_3 Yes, you always have the option to supply all information via CLI parameters. But I think @username_0 wants the interactivity without the cursor control :-) username_1: @username_0 Is this still a problem that seriously affects your workflow? Or is the CLI params option enough for your needs? username_0: `qx create` is used infrequently enough that if it's a huge amount of work to implement this, it's reasonable for the user (me, in this case) to open up a separate window (outside of emacs) to run the command. OTOH, I don't know that I've seen many other apps that do cursor control that don't provide a means of disabling it. From my perspective, this certainly need not be a blocker to first release, but should be "fixed" eventually rather than being closed.
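Sketching username_2's suggestion above: an `ask()` wrapper that uses inquirer on a capable terminal and falls back to plain readline otherwise. The function name and the TTY/`TERM` check are illustrative assumptions, not qooxdoo-compiler API:
```js
const inquirer = require("inquirer");
const readline = require("readline");

// Ask a single question; degrade to line-based prompting on dumb terminals.
async function ask(message) {
  if (process.stdout.isTTY && process.env.TERM !== "dumb") {
    // cursor-control-capable terminal: keep using inquirer
    const { answer } = await inquirer.prompt([
      { type: "input", name: "answer", message }
    ]);
    return answer;
  }
  // dumb terminal (e.g. an emacs shell): one question per line, no cursor control
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  return new Promise((resolve) =>
    rl.question(`${message} `, (answer) => { rl.close(); resolve(answer); })
  );
}
```
As noted above, multiple-choice prompts would still need a manual translation in the fallback path (e.g. printing numbered options and reading an index).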
INRIA/scikit-learn-mooc
988969932
Title: Inconsistent use of `ColumnTransformer` and `make_column_transformer` Question: username_0: Most of the notebooks use `ColumnTransformer`:
- glossary.md
- predictive_modeling_module_take_away.md: wrap_up_quiz 1
- 03_categorical_pipeline_column_transformer.py
- 03_categorical_pipeline_ex_02.py
- 03_categorical_pipeline_sol_02.py
- parameter_tuning_ex_02.py and parameter_tuning_sol_02.py
- parameter_tuning_ex_03.py and parameter_tuning_sol_03.py
- parameter_tuning_grid_search.py
- parameter_tuning_nested.py
- parameter_tuning_randomized_search.py

On the other hand, `make_column_transformer` is only used in:
- ensemble_random_forest.py
- wrap_up_quiz 1
- wrap_up_quiz 4
- wrap_up_quiz 5

I recommend keeping either one or the other to avoid possible confusion. Keeping `make_column_transformer` means more editing in the present contents but will reduce variance in naming (and therefore calling) the transformers. What do you think? Answers: username_1: I would use both, but make sure that we present both. We have the same issue with `Pipeline` and `make_pipeline`. I see that we do a slightly better job stating that `make_pipeline` is creating a `Pipeline`. But I think it would be worth showing how to create a `Pipeline` directly. The reason to use both is that there are times where you want to use one or the other, specifically.
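For anyone comparing the two spellings: they build the same estimator, and `make_column_transformer` only auto-generates the names (the column lists here are illustrative):
```python
from sklearn.compose import ColumnTransformer, make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# explicit names
explicit = ColumnTransformer(transformers=[
    ("onehotencoder", OneHotEncoder(), ["sex"]),
    ("standardscaler", StandardScaler(), ["age"]),
])

# shorthand: names are derived from the estimator class names
shorthand = make_column_transformer(
    (OneHotEncoder(), ["sex"]),
    (StandardScaler(), ["age"]),
)
```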
smacademic/project-cgkm
440697519
Title: Need Time due to arcs and tasks Question: username_0: Currently there is no way for users to see what time a task or arc is due by; they can only see the date. In order to add time we can do two things: 1) Make the `dueDate` in arc and task also contain the time 2) Add another time attribute to both arc and task I believe it is best to go with option 2, as many of our date-related functions compare date strings, and adding time to the end of the strings may make functions more complicated than needed. Answers: username_1: I would also opt for option 2, as not all tasks will want a time associated with them and option 2 allows for this. username_2: I agree with @username_1. An additional, optional attribute in the DB would work. username_3: I also agree with the previous reply that option 2 would be preferable Status: Issue closed
Automattic/mongoose
296854302
Title: Cloning a schema does not seem to clone virtuals Question: username_0: <!-- *Before creating an issue please make sure you are using the latest version of mongoose --> **Do you want to request a *feature* or report a *bug*?** Bug **What is the current behavior?** When cloning a schema, virtuals defined on the original schema do not seem to be copied over to the new schema. **If the current behavior is a bug, please provide the steps to reproduce.** <!-- If you can, provide a standalone script / gist to reproduce your issue --> ```js var assert = require('chai').assert var mongoose = require('mongoose') var Schema = require('mongoose/lib/schema'); describe('Schema.clone', function() { it('Correctly clones a schema virtual', function(done) { var UserSchema = new Schema({ firstName: { type: String, required: true }, lastName: { type: String, required: true } }) UserSchema.virtual('fullName').get(function () { return this.firstName + ' ' + this.lastName }) // Not really part of the test case, just for extra re-assurance UserSchema.methods.getFullName = function () { return this.firstName + ' ' + this.lastName } const User = mongoose.model('user', UserSchema) const user = new User() user.set({ firstName: 'Jane', lastName: 'Doe' }) assert.equal(user.fullName, '<NAME>') const clonedUserSchema = UserSchema.clone() const ClonedUser = mongoose.model('user-clone', clonedUserSchema) const user2 = new ClonedUser() user2.set({ firstName: 'Jack', lastName: 'Doe' }) // Passes! assert.equal(user2.firstName, 'Jack') assert.equal(user2.lastName, 'Doe') assert.equal(user2.getFullName(), '<NAME>') // the getter method works! // Failing :( assert.equal(user2.fullName, '<NAME>') done() }) }) ``` **Please mention your node.js, mongoose and MongoDB version.** node 8.9.4, mongoose 5.0.4 (also tested against mongoose 4.13.11 and latest master) Answers: username_1: Thanks for the complete repro script @username_0 !!!! verified: ``` /Users/username_1/dev/Help/5/: mocha 6133.js Schema.clone 1) Correctly clones a schema virtual 0 passing (33ms) 1 failing 1) Schema.clone Correctly clones a schema virtual: ReferenceError: clonedUserSchema is not defined at Context.<anonymous> (6133.js:25:17) ``` in schema.js: ``` Schema.prototype.clone = function() { var s = new Schema(this.paths, this.options); // Clone the call queue var cloneOpts = {}; s.callQueue = this.callQueue.map(function(f) { return f; }); s.methods = utils.clone(this.methods, cloneOpts); s.statics = utils.clone(this.statics, cloneOpts); s.query = utils.clone(this.query, cloneOpts); s.plugins = Array.prototype.slice.call(this.plugins); s._indexes = utils.clone(this._indexes, cloneOpts); s.s.hooks = this.s.hooks.clone(); return s; }; ``` and schema.test.js: ``` it('clone() copies methods, statics, and query helpers (gh-5752)', function(done) { ... it('clone() copies validators declared with validate() (gh-5607)', function(done) { ``` @username_2 I can tackle this as long as the exclusion of virtuals wasn't intentional. username_2: Thanks for reporting, will fix ASAP :+1: Status: Issue closed
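A plausible shape for the fix, mirroring how methods/statics are copied in the `clone()` source shown above (an untested sketch, not necessarily the patch that shipped):
```js
// inside Schema.prototype.clone, alongside the methods/statics copies:
s.virtuals = utils.clone(this.virtuals, cloneOpts); // carry virtuals over to the clone
```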
yiisoft/yii2-queue
268491944
Title: Mistake? Question: username_0: class DownloadJob extends Object implements \yii\queue\JobInterface instead of class DownloadJob extends Object implements \yii\queue\Job ? Answers: username_1: Nope. Status: Issue closed username_0: The version installed via composer does not contain JobInterface; it only has Job. I racked my brain until I finally beat this bug. So there is a mismatch between the documentation and the release. username_0: $composer require --prefer-dist yiisoft/yii2-queue Using version ^2.0 for yiisoft/yii2-queue $ ls cli closure debug drivers ErrorEvent.php ExecEvent.php gii JobEvent.php Job.php LogBehavior.php PushEvent.php Queue.php RetryableJob.php serializers So there is no JobInterface. username_2: the change is not released yet. username_0: From docs: The preferred way to install this extension is through composer. Either run php composer.phar require --prefer-dist yiisoft/yii2-queue And then in the basic usage section: class DownloadJob extends Object implements \yii\queue\JobInterface Please change the example, or the installation instructions (for composer). username_1: You are reading docs from master and they're about code from master. Here's what you need: https://github.com/yiisoft/yii2-queue/blob/2.0.0/docs/guide/README.md
TaleLin/lin-ui
522789029
Title: Tag component: documentation for the cell property is wrong Question: username_0: **LinUI version (required):** 0.6.5 **Device (required):** DevTools **Base library version (required):** 2.4.3 **For UI issues, please attach a screenshot** ![image](https://user-images.githubusercontent.com/49727104/68852118-38aeb180-0712-11ea-8b70-b8b371b54831.png) **Steps to reproduce (provide a code-snippet link if necessary)** The docs say the cell property accepts a string, but after passing a string, cell is null in the tap event callback. Looking at the source, this property actually accepts an Object, and passing an Object works as expected. Answers: username_1: Thanks for pointing this out; we will fix it as soon as possible :octocat: [From gitme Android](http://flutterchina.club/app/gm.html) username_2: Fixed. Thanks for the correction. Status: Issue closed
nanocurrency/nano-node
1127794975
Title: Add descriptions to class files Question: username_0:
```
bootstrap_initiator: Client side portion to initiate bootstrap sessions. Prevents multiple legacy-type bootstrap sessions from being started at the same time. Does permit lazy/wallet bootstrap sessions to overlap with legacy sessions.
bootstrap_attempts: Container for bootstrap sessions that are active. Owned by bootstrap_initiator.
bootstrap_listener: Server side portion of bootstrap sessions. Listens for new socket connections and spawns bootstrap_server objects when connected.
bootstrap_server: Owns the server side of a bootstrap connection. Responds to bootstrap messages sent over the socket.
bootstrap_attempt: Polymorphic base class for bootstrap sessions.
bootstrap_attempt_lazy: Lazy bootstrap session. Started with a block hash, this will "trace down" the blocks obtained to find a connection to the ledger. This attempts to quickly bootstrap a section of the ledger given a hash that's known to be confirmed.
bootstrap_attempt_wallet: Wallet bootstrap session. This session will trace down accounts within local wallets to try and bootstrap those blocks first.
bootstrap_attempt_legacy: Legacy bootstrap session. This is made up of 3 phases: frontier requests, bootstrap pulls, bootstrap pushes.
bootstrap_client: Owns the client side of the bootstrap connection.
bootstrap_connections: Container for bootstrap_client objects. Owned by bootstrap_initiator, which pools open connections and makes them available for use by different bootstrap sessions.
frontier_req_server: Server side of a frontier request. Created when a bootstrap_server receives a frontier_req message and exited when end-of-list is reached.
frontier_req_client: Client side of a frontier request. Created to send and listen for frontier sequences from the server.
bulk_pull_server: Server side of a bulk_pull request. Created when bootstrap_server receives a bulk_pull message and is exited after the contents have been sent. If the 'start' in the bulk_pull message is an account, send blocks for that account down to 'end'. If the 'start' is a block hash, send blocks for that chain down to 'end'. If end doesn't exist, send all accounts in the chain.
bulk_pull_client: Client side of a bulk_pull request. Created when the bootstrap_attempt wants to make a bulk_pull request to the remote side.
bulk_push_server: Server side of a bulk_push request. Receives blocks and puts them in the block processor to be processed.
bulk_push_client: Client side of a bulk_push request. Sends a sequence of blocks the other side did not report in their frontier_req response.
```
<issue_closed> Status: Issue closed
PixelsByLucas/share-stuff
677363644
Title: Create Return Flow Question: username_0: Requires back end and front end changes. New routes, notification components, potentially new models. ![image](https://user-images.githubusercontent.com/40898387/89973556-9c78f300-dc2e-11ea-86e3-ffb297a8d0ee.png)<issue_closed> Status: Issue closed
dart-lang/sdk
450554280
Title: Should we be using Since? Question: username_0: https://github.com/dart-lang/sdk/blob/a25f927ba9b758b9648d4b375a8fcee84b28f78d/sdk/lib/internal/internal.dart#L200-L204 aa2ce7cfbf just landed `HttpClientResponse.compressionState` and `HttpClientResponseCompressionState` Should these be annotated w/ `Since`? @username_2 @turnidge @username_1 Should we look to get Analyzer to support `Since` so it's useful for end users? @stereotype441 @username_4 @bwilkerson Answers: username_0: Ditto for * `dart:developer` in I33b8324e9c16fb12e80dd91beb275320b64f7316 * new RegExp features in 4028fec3b56703752dbab6b5d5647fb9ac204774 username_0: CC @mit-mit username_0: Ditto for `dart:isolate` debug name - ac2c934563fb10a0033612821e595894bcf9ded9 username_1: @turnidge's input would surely be invaluable, but @username_3 is probably the more correct Todd in this case. username_2: True, we should use it. If for nothing else, then for documentation. username_3: `HttpClientResponse.compressionState` and `HttpClientResponseCompressionState` are done as of https://github.com/dart-lang/sdk/commit/ebbfc7d8ca5366888b832cf83ee0df49e8f1b745 username_2: For the record, new features should be released in minor version increments, so the next release where new features are released will be `2.4.0`. If we release a `2.3.2`, it will be a patch/bug-fix release cherry-picked from head, and it will most likely not contain any of the new features. So, for not-yet-released features, the annotations should be `@Since("2.4.0")`. username_4: I think the sdk version doesn't follow strict semver, and won't always rev the minor version even when there are api additions (revving the sdk version is more of a product decision than a semver one). Status: Issue closed username_2: We should generally use `Since` on new features.
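For reference, a minimal sketch of the annotation pattern under discussion; the `Since` class shown here is a stand-in mirroring the one in `dart:core`, and the member is made up:
```dart
// Stand-in for dart:core's Since annotation, included for self-containment.
class Since {
  final String version;
  const Since(this.version);
}

class Example {
  // Mark the SDK release that first ships this API, e.g. "2.4.0" per above.
  @Since("2.4.0")
  int get newFeature => 42;
}

void main() => print(Example().newFeature);
```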
jetspace/desktop
137036013
Title: Crash of side-panel-settings Question: username_0: ## Reporting issue in application side-settings-explorer @ 0.92-13 The settings manager crashes if you try to open the panel tab ### Short description: It can happen at any time, so I guess it is critical. I have all plugins disabled, which might be the problem there... ### Steps to reproduce: Open panel settings with no active plugins -> crash ### Ideas: I guess something like sizeof instead of strlen..<issue_closed> Status: Issue closed
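For context on the `sizeof`-vs-`strlen` guess: applied to a pointer, `sizeof` yields the pointer size rather than the string length, a classic source of over-reads. A minimal C illustration (not the actual side-panel-settings code):
```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *label = "panel";             /* hypothetical string */
    printf("sizeof: %zu\n", sizeof(label));  /* pointer size, e.g. 8 */
    printf("strlen: %zu\n", strlen(label));  /* actual length: 5 */
    return 0;
}
```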
seqan/seqan
225642443
Title: Bug in alignedReadStore? Question: username_0:
```cpp
void countReadsPerGene(String<unsigned> & readsPerGene, String<TIntervalTree> const & intervalTrees, TStore const & store)
{
    resize(readsPerGene, length(store.annotationStore), 0);
    String<TId> result;
    int numAlignments = length(store.alignedReadStore);
    // iterate aligned reads and search their begin and end positions
    SEQAN_OMP_PRAGMA(parallel for private (result))
    for (int i = 0; i < numAlignments; ++i)
    {
        TAlignedRead const & ar = store.alignedReadStore[i];
        TPos queryBegin = _min(ar.beginPos, ar.endPos); // In gapped space
        TPos queryEnd = _max(ar.beginPos, ar.endPos);   // In gapped space
        // search read-overlapping genes
        findIntervals(result, intervalTrees[ar.contigId] /*Ungapped space*/, queryBegin, queryEnd);
        // increase read counter for each overlapping annotation given the id in the interval tree
        for (unsigned j = 0; j < length(result); ++j)
        {
            SEQAN_OMP_PRAGMA(atomic)
            readsPerGene[result[j]] += 1;
        }
    }
}
```
Answers: username_0: @username_1 have you had time to look at this? username_1: @username_0 @rrhan As you all know, I am not an expert on this issue. But I just read the tutorial for this user since you requested it. http://seqan.readthedocs.io/en/master/Tutorial/HowTo/UseCases/SimpleRnaSeq.html According to the tutorial, it starts with loading of .gff (genes) and .bam (reads) files - not with the alignment. Hence all the positions have to be based on the given files, and this can't be seqan's gap representation. I guess he is developing his own application that involves alignment somehow. But this is just my speculation. I can't find any clue as to why he thinks it is wrong. username_1: @username_0 @rrahn I think we can close this. Please check the comment above and let me know your opinion. Status: Issue closed
mezz/JustEnoughItems
1051299529
Title: JEI "configured mod" is nowhere to be found. Question: username_0: When I click the wrench in game it tells me I need this "configured" mod to access the JEI config but I can't find this mod anywhere. It's not under JEI dependencies in Curseforge and the only thing that comes up on google is Mr.Crayfish's configured mod, so I don't really know what to do to access the JEI settings. Answers: username_1: Mr. Crayfish "Configured" is correct: https://www.curseforge.com/minecraft/mc-mods/configured
lordmilko/PrtgAPI
329591820
Title: Object Location not getting properly set Question: username_0: Hello! Great framework you've created here; I've been using it for some time with our setup and it works great. The lone issue I have right now is that if I create an object, set its InheritLocation property to False and its Location property to GPS coordinates, the location is set, but when I check within the PRTG GUI, it is set to a less-precise street address. However, I can enter the same coordinates via PRTG's GUI and they work fine and stay recorded as GPS coordinates. Any ideas what the issue might be? Thanks!! Jonas Answers: username_1: Hi @username_0, PrtgAPI automatically performs a geo lookup on the location that you specify, with the implication being you entered an address and now need to resolve its coordinates, and then sets the address and lon/lat based on the location data that is returned. PRTG also performs a geo lookup on the specified location (GPS coordinate or street address), but leaves the display location as whatever you entered. If you specify `-Verbose` to `Set-ObjectProperty` and look at the URL that is executed, is the `lonlat_=` value in the request different to the coordinates you entered? (albeit potentially flipped from lat/lon to lon/lat) If so, you can potentially bypass the geo lookup mechanism by setting your location using the raw API, as follows
```powershell
Get-Device -Id 1001 | Set-ObjectProperty -RawParameters @{
    locationgroup=0
    lonlat_="-73.998672,40.714728"
    location_="40.714728,-73.998672"
} -Force
```
The example above shows how to set the location of the device with ID 1001 to the example GPS coordinates shown in the *Location (for Geo Maps)* help popup on the object Settings page. An important thing to note is that it is very important you put your lon/lat in the correct order, otherwise PRTG will not correctly show the location on the map. When you enter lat/lon location **40.714728,-73.998672**, after performing the geo lookup PRTG appears to resolve this to the same location with the lat/lon flipped. Since we are bypassing the geo lookup, we need to ensure we format things correctly. Note that when setting the `Location` (or any other property) using `ObjectProperty`/`ChannelProperty` enumeration values, you don't need to set `InheritLocation` to false, since PrtgAPI knows which settings each property is "dependent" on in order to be active and will automatically include these for you in your API request. Regards, username_1 Status: Issue closed username_0: Thanks, your workaround above using RawParameters works great!! Issue closed. username_1: Hi @username_0, Please be advised that as of version 0.9.6 PrtgAPI now supports specifying raw GPS location coordinates natively
```powershell
Get-Device -Id 1001 | Set-ObjectProperty Location 40.714728,-73.998672
```
Regards, username_1
snakemake/snakemake
695554021
Title: Pull from a remote repository or set the configuration file for Conda and Singularity? Question: username_0: To my knowledge, in a Snakemake workflow the analysis environment is fixed by writing a YAML-style configuration file for conda, and by pulling images hosted on DockerHub/SingularityHub for singularity. However, I think it would be better to be able to specify the URLs of remote repositories (e.g. https://anaconda.org/bioconda/sailfish) in the conda tag. Likewise, I would like to be able to specify a Dockerfile or Singularity definition file (e.g. https://hub.docker.com/r/zavolab/salmon/dockerfile) on the local machine in the container tag, to specify the process for creating the environment.
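For context, a minimal sketch of how a rule pins its environment today (rule and file names are made up; the request above is to additionally accept remote repository URLs or local build recipes in these tags):
```python
rule quantify:
    input: "reads.fq"
    output: "quant.sf"
    conda: "envs/salmon.yaml"             # local YAML environment spec
    container: "docker://zavolab/salmon"  # pre-built image pulled from a registry
    shell: "salmon --version > {output}"
```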
kubernetes/kubeadm
363864817
Title: Problem with Init pod Question: username_0: Hi all ! I have problem with execute command uadmin@kubernetes-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.12.0 [sudo] password for uadmin: [init] using Kubernetes version: v1.12.0 [preflight] running pre-flight checks [WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.12.0. Kubeadm version: 1.11.x I0926 05:27:35.104772 76803 kernel_validator.go:81] Validating kernel version I0926 05:27:35.105030 76803 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03 [preflight] Some fatal errors occurred: [ERROR Port-6443]: Port 6443 is in use [ERROR Port-10251]: Port 10251 is in use [ERROR Port-10252]: Port 10252 is in use [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [ERROR Port-10250]: Port 10250 is in use [ERROR Port-2379]: Port 2379 is in use [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` But command uadmin@kubernetes-master:~$ netstat -a | grep 102 shows me uadmin@kubernetes-master:~$ netstat -a | grep 102 tcp 0 0 localhost.localdo:10248 0.0.0.0:* LISTEN tcp 0 0 localhost.localdo:10251 0.0.0.0:* LISTEN tcp 0 0 localhost.localdo:10252 0.0.0.0:* LISTEN tcp 0 0 localhost.localdo:10251 localhost.localdo:51418 TIME_WAIT tcp 0 0 localhost.localdo:10252 localhost.localdo:35284 TIME_WAIT tcp 0 0 localhost.localdo:35258 localhost.localdo:10252 TIME_WAIT tcp 0 0 localhost.localdo:10252 localhost.localdo:35204 TIME_WAIT tcp 0 0 localhost.localdo:10251 localhost.localdo:51360 TIME_WAIT tcp 0 0 localhost.localdo:10251 localhost.localdo:51484 TIME_WAIT tcp 0 0 localhost.localdo:10252 localhost.localdo:35324 TIME_WAIT tcp 0 0 localhost.localdo:51598 localhost.localdo:10251 TIME_WAIT tcp 0 0 localhost.localdo:35348 localhost.localdo:10252 TIME_WAIT tcp 0 0 localhost.localdo:10251 localhost.localdo:51508 TIME_WAIT tcp 0 0 localhost.localdo:10251 localhost.localdo:51444 TIME_WAIT tcp 0 0 localhost.localdo:10252 localhost.localdo:35438 TIME_WAIT tcp6 0 0 [::]:10250 [::]:* LISTEN Is the port 10251 in use ? I do not see any process which uses port 10251. Any idea ? Answers: username_1: @username_0 try calling `kubeadm reset` first. it seems like `kubeadm init` was already called on this node. /priority awaiting-more-evidence username_0: Thanks ! It help ! But with errors uadmin@kubernetes-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.12.0 [init] using Kubernetes version: v1.12.0 [preflight] running pre-flight checks [WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.12.0. 
Kubeadm version: 1.11.x I0926 05:46:10.804698 83029 kernel_validator.go:81] Validating kernel version I0926 05:46:10.805059 83029 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03 [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [preflight] Some fatal errors occurred: [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.0]: exit status 1 [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.12.0]: exit status 1 [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.12.0]: exit status 1 [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.12.0]: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` uadmin@kubernetes-master:~$ sudo kubeadm config images pull unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving username_1: you don't seem to have internet access. you can download the images when you have internet and try again. /close username_0: Hmm, very interesting. I have stable internet connection. Ok. I will try later. username_2: username_1 After executing the following commands that you mentioned, it still has one of the worker nodes NotReady. It is marked with ContainerCreating. ``` $ sudo kubeadm reset $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 ``` While I tried to configure rbac, it reminds me of the error of 404. ` kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml ` error: unable to read URL "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml", server reported 404 Not Found, status code=40 username_1: hi, flannel is pretty much unsupported by the kubeadm team at this point due to a number of bugs. you can try another CNI plugin. username_2: Hi username_1 How to install CNI for Flannel? I see the following commands recommended by someone. But I could not execute it. ``` go get -d github.com/containernetworking/plugins cd ~/go/src/github.com/containernetworking/plugins ./build.sh sudo cp bin/* /opt/cni/bin/ ``` I am in the Nvidia Jetson AI Dev Environment that enable Docker. But Jetson is pre-install Ubuntu 18.04. At present, I operate it in the Command Line interface of Jetson. Please indicate how to install the CNI for Flannel. Best regards username_1: i haven't tried flannel in a while. you can try asking in the support channels, such as #kubeadm on k8s slack. username_2: ``` ~/go/src/github.com/containernetworking/plugins$ sudo cp bin/* /opt/cni/bin/ ``` It addresses the completion of the CNI plugins. username_2: **4. Get nodes** I can get all the nodes with the Status of Ready now after executing the commands on the Master node. ``` $ kubectl get nodes ``` Notes: Before setting up the Flannel, you must complete the master node initiation of Kubernetes and worker nodes joining. Cheers.
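Regarding the 404 on kube-flannel-rbac.yml above: flannel's standalone RBAC manifest appears to have been folded into its main manifest, so applying kube-flannel.yml alone is usually sufficient (verify the URL against the current flannel repo before relying on it):
```sh
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```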
ant-design/ant-design
319242163
Title: Upload component couldn't display the file name Question: username_0:
```
< (4) ["name", "lastModified", "lastModifiedDate", "webkitRelativePath"]
```
### Temporary workaround (monkey-patch) Add or reference the following code in the app entry:
```js
const uploadUtils = require('antd/es/upload/utils');

uploadUtils.fileToObject = function (file) {
  const fileDuplicated = {};
  if (file && file.__proto__) {
    Object.keys(file.__proto__).forEach(key => {
      fileDuplicated[key] = file[key];
    });
  }
  return {
    // file: uid
    ...file,
    // fileDuplicated: other `File` props.
    ...fileDuplicated,
    percent: 0,
    originFileObj: file,
  };
};
```
--- Off topic: this was still working two days ago; today it suddenly stopped displaying, so it's probably a browser update. <!-- generated by ant-design-issue-helper. DO NOT REMOVE --> Answers: username_0: It might be better to link this issue to react-component/upload… username_1: ![image](https://user-images.githubusercontent.com/8317101/39504174-61de969a-4dfc-11e8-9fc1-3154459d1390.png) It is probably caused by this change; can it be rolled back? Status: Issue closed
privacy-protection-tools/anti-AD
625900233
Title: The way you delete issues looks exactly like the GFW Question: username_0: ### https://github.com/privacy-protection-tools/anti-AD/issues/140 **Click the link above and see whether it looks like the 404 pages you constantly run into on the Chinese internet. Congratulations: now you, too, get to hold a little bit of 404 power. The issue I opened contained absolutely nothing that could make anyone uncomfortable. It simply cited [neoHosts](https://github.com/neoFelhz/neohosts) as an example to argue that this list should not, out of personal preference, include ad or tracking entries beyond what its description states. Even if you do not agree with neoFelhz's philosophy and insist on adding extra entries, I still think that openly declaring it, the way [yhost](https://github.com/vokins/yhosts/wiki/%E9%83%A8%E5%88%86%E9%97%AE%E9%A2%98%E8%AF%B4%E6%98%8E) does, is the most basic moral baseline. Instead, you chose to delete the issue outright, with a practiced motion that looks exactly like the GFW: whatever it cannot tolerate, it makes disappear. From this point on, this repo can no longer be trusted in my eyes, and I will do everything I can to let other subscribers of the list know this story. Even if you keep deleting until this, too, becomes a 404, the internet has a memory, and not everyone's memory is short. The only pity is that all that remains of issue #140 for later readers to glimpse is the email notification, which does not contain my original text. But I will not make that mistake again.** ![image](https://user-images.githubusercontent.com/22477230/83049623-0cd79580-a07e-11ea-94e6-40914c17ca69.png)
winlibs/libxslt
476246252
Title: LibXSLT version 1.1.33 Question: username_0: Directadmin surprised me with an update of LibXSLT to 1.1.33: https://forum.directadmin.com/showthread.php?t=58452 The update is already 6 months old: https://github.com/GNOME/libxslt/releases LinuxFromScratch has an additional security patch for 1.1.33. See http://www.linuxfromscratch.org/blfs/view/cvs/general/libxslt.html<issue_closed> Status: Issue closed
exevil/sketch-grid-master
217133922
Title: Layout Settings for All Artboards doesn't work Question: username_0: It only updates the layout for the current artboard. Answers: username_1: @username_0: Hey! Sorry for the delay. Just checked: you can now assign grid and layout for all artboards when you hit the regular `View → Canvas → Grid/Layout Settings...` from the app menu with no artboard selected. So I'll remove this functionality from the plugin in the next commit. Thanks anyway! 👍 Status: Issue closed
swar/nba_api
562174467
Title: Get Player Championships per Season Question: username_0: Thanks for all the recent help with timeouts. I don't know a way to get championships for each player (i.e., how many championships did LeBron win). I can get them for each team, but that might be fuzzy if a player is traded in the middle of the season. Any endpoints? Status: Issue closed Answers: username_1: No endpoints to my knowledge. You're probably better off getting the rosters for the championship games and finding players that way and getting a distinct count on years. Closing this out, feel free to open if you have another followup.
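A rough sketch of username_1's roster approach with nba_api; since there is no championships endpoint, the season-to-champion mapping is something you would maintain yourself (the team ID below is Cleveland's, for illustration):
```python
from collections import Counter
from nba_api.stats.endpoints import commonteamroster

# assumed input: season -> championship team ID, maintained by hand
champions = {"2015-16": 1610612739}  # e.g. CLE; extend per season

titles = Counter()
for season, team_id in champions.items():
    # pull that season's roster for the championship team
    roster = commonteamroster.CommonTeamRoster(
        team_id=team_id, season=season).get_data_frames()[0]
    titles.update(roster["PLAYER"])

print(titles.most_common(5))
```
Note this counts end-of-season rosters, so the mid-season-trade fuzziness mentioned above still applies.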
geeklearningio/gl-dotnet-email
349733201
Title: Special characters issue Question: username_0: When writing raw text in the *.hbs file, special characters such as é, à, è aren't rendered correctly in the email. I tried all possible encodings for the *.hbs files. I had to use character entities such as "&eacute;" instead. I'm not talking about variables enclosed in braces, but raw text directly in the template. Answers: username_1: Seems it's an unspecified charset causing this issue: https://www.emailonacid.com/blog/article/email-development/the_importance_of_content-type_character_encoding_in_html_emails/ Is it with SendGrid or SMTP? username_0: Yes exactly. It's with SMTP. I tried to specify the encoding within the template, and tried to change the `SmtpEmailProvider.cs` class and add `message.Headers.Add("Content-Type", "content=text/html; charset=\"UTF-8\"");` to check if the problem could be solved that way. Outlook is still misreading special characters. It would be ideal though if I could stick with specifying the content type in the *.hbs files, but I read here https://stackoverflow.com/questions/16255487/encoding-to-utf-8-in-email that MailMessage removes the content type tags in the html body; maybe MimeMessage does the same... username_0: The problem is clearly coming from MimeKit. I tried a little console app and I have exactly the same problem. So we might open an issue in their repo? FYI, when using System.Net.Mail, it works fine:
```
var message = new MailMessage("<EMAIL>", "<EMAIL>", "test", "é à è");
message.Subject = "test";
message.BodyTransferEncoding = System.Net.Mime.TransferEncoding.Base64;

using (var client = new SmtpClient("127.0.0.1", 25))
{
    client.Send(message);
}
```
Now if you change the `BodyTransferEncoding` property to 8 bit, then we run into the exact same problem as with MimeKit... username_2: So we should follow [this simple fix](https://github.com/jstedfast/MimeKit/issues/424#issuecomment-412350376), and allow, maybe in a second step, configuring the chosen ContentTransferEncoding.
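A minimal MimeKit-side sketch of the kind of fix being referenced, forcing a transfer encoding on the body part so 8-bit text survives transport (assumes the provider builds its message with MimeKit; the addresses are placeholders):
```csharp
using MimeKit;

var message = new MimeMessage();
message.From.Add(MailboxAddress.Parse("from@example.com")); // placeholder
message.To.Add(MailboxAddress.Parse("to@example.com"));     // placeholder
message.Subject = "test";

var body = new TextPart("html") { Text = "é à è" };
// Base64 (or QuotedPrintable) avoids sending raw 8-bit bytes that some
// servers/clients mangle, matching the System.Net.Mail result above.
body.ContentTransferEncoding = ContentEncoding.Base64;
message.Body = body;
```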
dotnet/wpf
473082032
Title: ReadyToRun images of WPF applications crash Question: username_0: Project:
```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
    <PublishTrimmed>true</PublishTrimmed>
    <PublishReadyToRun>true</PublishReadyToRun>
    <PublishSingleFile>true</PublishSingleFile>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <Platforms>x64</Platforms>
    <OutputPath>bin\X64\Release\</OutputPath>
    <AssemblyVersion>1.0.1.7144</AssemblyVersion>
    <FileVersion>1.0.1.7144</FileVersion>
  </PropertyGroup>
</Project>
```
MainWindow.xaml
```xaml
<Window x:Class="wpf1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:wpf1"
        mc:Ignorable="d"
        Title="MainWindow" Height="450" Width="800">
    <Grid>
        <Button x:Name="hello" Width="100" Height="35"></Button>
    </Grid>
</Window>
```
Build this using a recent preview8 SDK like this: `dotnet publish -r win-x64 -c release`, then run the exe produced under `bin\x64\release\netcoreapp3.0\win-x64\publish\` global.json
```json
{
  "sdk": {
    "version": "3.0.100-preview8-013417"
  }
}
```
Observed: Crash - `FileNotFoundException` for `System.Diagnostics.Debug.dll` Answers: username_0: When producing ReadyToRun images, the ILLinker is configured to skip C++/CLI images. See https://github.com/mono/linker/issues/651 and https://github.com/mono/linker/pull/658. In turn, this prevents dependencies of such assemblies (like System.Diagnostics.Debug.dll, which is required by DirectWriteForwarder.dll) from being identified and included in the ReadyToRun images. username_0: This depends on https://github.com/mono/linker/issues/676 username_0: Closing - this seems to be working in preview 8 3.0.100-preview8-013656
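On affected previews, one possible stop-gap is to root the assemblies the linker misses; `TrimmerRootAssembly` is a standard SDK trimming item, though listing System.Diagnostics.Debug here is an assumption based on the crash above, not a documented WPF workaround:
```xml
<!-- In the .csproj: keep the assembly the trimmed/R2R publish loses. -->
<ItemGroup>
  <TrimmerRootAssembly Include="System.Diagnostics.Debug" />
</ItemGroup>
```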
McJtyMods/TheOneProbe
516990580
Title: The entity models keep spinning in HUD Question: username_0: ![image](https://user-images.githubusercontent.com/47871887/68104694-551c4400-fef1-11e9-9c51-d779a0a4bf08.png) Minecraft 1.14.4 Forge 28.1.76 OptiFine HD U F4 theoneprobe-1.14-1.4.37 Answers: username_0: e.g. item frames, entities like cow, parrots, etc. username_1: This is also happening to me, same versions of everything except for Optifine, I'm not using that.
nasa/MLMCPy
483645551
Title: Should cache be generated with the same inputs for each level? Question: username_0: Currently, different inputs are used for each level of the cache. This means more model evaluations are done up front (for example, with levels 0-2, level 1 is evaluated on two different sets of inputs for calculating differences with levels 2 and 0), but more outputs are reused come simulation time (?). Is this the right strategy?
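For reference, the standard multilevel Monte Carlo estimator this mirrors (Giles-style; not MLMCPy-specific guidance) couples inputs within each level difference but uses independent inputs across levels:
```latex
\mathbb{E}[Q_L] \approx \frac{1}{N_0}\sum_{i=1}^{N_0} Q_0\left(\omega_i^{(0)}\right)
+ \sum_{\ell=1}^{L} \frac{1}{N_\ell}\sum_{i=1}^{N_\ell}
\left( Q_\ell\left(\omega_i^{(\ell)}\right) - Q_{\ell-1}\left(\omega_i^{(\ell)}\right) \right)
```
Within each correction term both fidelities share the same sample \(\omega_i^{(\ell)}\), while samples are independent across terms, which corresponds to the up-front duplication of level evaluations described above.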
EvotecIT/PSWriteHTML
774753529
Title: Question table display position Question: username_0: Hello, I have the following code to display two tables. They are displayed side by side, but I want to display the "Database-Summary" table first and the "Database-Details" table below it. I can't figure out where the problem is.
```powershell
New-HTML -TitleText 'Database-Reports' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLSection -HeaderText 'Database-Report' -BorderRadius 15px {
        New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
            }
        }
        New-HTMLSection -HeaderText 'Database-Details' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength, searchPanes {
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                }
            }
        }
    }
}
```
Answers: username_1: You can use New-HTMLContainer to force it into up/down
```powershell
New-HTML -TitleText 'Database-Reports' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLSection -HeaderText 'Database-Report' -BorderRadius 15px {
        New-HTMLContainer {
            New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
                }
            }
            New-HTMLSection -HeaderText 'Database-Details' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength, searchPanes {
                        New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                        New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                    }
                }
            }
        }
    }
}
```
Alternatively you can use `Wrap` property on `New-HTMLSection`
```powershell
New-HTML -TitleText 'Database-Reports' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLSection -HeaderText 'Database-Report' -BorderRadius 15px -Wrap wrap {
        New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
            }
        }
        New-HTMLSection -HeaderText 'Database-Details' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength, searchPanes {
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                    New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                }
            }
        }
    }
}
```
Status: Issue closed username_0: New-HTMLContainer has no effect. Looks the same as before. With -Wrap wrap the next table is displayed under the first one, but both are very, very small now. It looks like there are a few columns now in the first New-HTMLSection and both are displayed only in the first column. ![2020-12-26 21_54_30-192 168 179 43 - Remotedesktopverbindung](https://user-images.githubusercontent.com/73397287/103159049-fbe23480-47c4-11eb-865b-9b1f8dc7b101.png) username_1: Which version are you using?
username_0: 0.0.123 username_1: Weird, When I run this ```powershell New-HTML -TitleText 'Database-Reports' -Temporary -Online -ShowHTML { New-HTMLSection -HeaderText 'Database-Report' -BorderRadius 15px { New-HTMLContainer { New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse { New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px { New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border } } New-HTMLSection -HeaderText 'Database-Details' -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse { New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px { New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength, searchPanes { New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string New-TableCondition -Name 'DatabaseIndexState' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string } } } } } } ``` I get ![image](https://user-images.githubusercontent.com/15063294/103159722-0a344e80-47cd-11eb-819f-add287de998a.png) You sure you're not modifying something? username_0: sure, here is the code i'm using now for testing. Added some chartbars but not active at the moment. 
first solve this problem
```powershell
New-HTML -TitleText 'Database-Reports (Non DAG)' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLSection -HeaderText 'Database-Report (Non DAG)' -BorderRadius 15px -HeaderTextColor black -CanCollapse -Wrap wrap {
        New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
            }
        }
        New-HTMLSection -HeaderText 'Database-Details' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
            New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength -DisableSearch {
                    New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                    New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                    New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                }
                # New-HTMLPanel -BackgroundColor LightGray -BorderRadius 15px {
                #     New-HTMLChart {
                #         New-ChartBarOptions -DataLabelsEnabled $true -DataLabelsColor black -Type bar
                #         New-ChartLegend -names 'Mailboxes', 'Database-Free-Space', 'Database-Size' -LegendPosition bottom -Color DodgerBlue, green, red
                #         ForEach ($database In $databases)
                #         {
                #             $pf = (Get-MailboxDatabase "$database" | get-mailbox -ResultSize Unlimited).count
                #             $dbsize = $database.DatabaseSize
                #             $dbsizegb = [double]$dbsize/1024/1024/1024
                #             $dbFreeSpace = $database.AvailableNewMailboxSpace -replace '\([^\)]+\)'
                #             New-ChartBar -Name $database.name -Value $pf, $dbfreespace, $dbsizegb
                #         }
                #     }
                # }
            }
        }
    }
}
```
username_1:
```powershell
New-HTML -TitleText 'Database-Reports (Non DAG)' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLContainer {
        New-HTMLSection -HeaderText 'Database-Report (Non DAG)' -BorderRadius 15px -HeaderTextColor black -CanCollapse -Wrap wrap {
            New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
                }
            }
            New-HTMLSection -HeaderText 'Database-Details' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength -DisableSearch {
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                    }
                    # (commented chart code as above)
                }
            }
        }
    }
}
```
This works fine username_1: You can also use invisible section/panels
```powershell
New-HTML -TitleText 'Database-Reports (Non DAG)' -FilePath "C:\temp\test.html" -Online -ShowHTML {
    New-HTMLSection -Invisible {
        New-HTMLSection -HeaderText 'Database-Report (Non DAG)' -BorderRadius 15px -HeaderTextColor black -CanCollapse -Wrap wrap {
            New-HTMLSection -HeaderText 'Database-Summary' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable ($OutputDatabasesSum | Select-Object -Last 1) -DisablePaging -DisableSearch -HideButtons -HideFooter -ScrollCollapse -FixedHeader -Style cell-border
                }
            }
            New-HTMLSection -HeaderText 'Database-Details' -HeaderTextAlignment center -HeaderTextColor black -HeaderBackGroundColor limegreen -BorderRadius 15px -CanCollapse {
                New-HTMLPanel -BackgroundColor gainsboro -BorderRadius 15px {
                    New-HTMLTable -DataTable $OutputDatabases -PagingLength 10 -HideFooter -ScrollCollapse -FixedHeader -Style cell-border -Buttons excelHtml5, pageLength -DisableSearch {
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Failed' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'FailedAndSuspended' -BackgroundColor red -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'NotApplicable' -BackgroundColor orange -Color white -ComparisonType string
                        New-TableCondition -Name 'Database-Index-State' -Operator eq -Value 'Crawling' -BackgroundColor orange -Color white -ComparisonType string
                    }
                    # (commented chart code as above)
                }
            }
        }
    }
}
```
umijs/umi
607963750
Title: less file import error in the scaffold-generated example Question: username_0: <!-- https://github.com/YOUR_REPOSITORY_URL -->

## How To Reproduce

**Steps to reproduce the behavior:**

1.
2.

**Expected behavior**

1.
2.

## Context

- **Umi Version**: 3.1.2
- **Node Version**: v12.13.0
- **Platform**: win10 x64

![微信截图_20200428094923](https://user-images.githubusercontent.com/47775301/80437963-9d16a380-8935-11ea-901a-1251faa9ea38.png)

Answers: username_0: The less file import in the scaffold-generated example reports an error.

username_1: @username_0 Is the content of your `tsconfig.json` like this?

```json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "moduleResolution": "node",
    "importHelpers": true,
    "jsx": "react",
    "esModuleInterop": true,
    "sourceMap": true,
    "baseUrl": "./",
    "strict": true,
    "paths": {
      "@/*": ["src/*"],
      "@@/*": ["src/.umi/*"]
    },
    "allowSyntheticDefaultImports": true
  },
  "include": ["mock/**/*", "src/**/*", "config/**/*", ".umirc.ts"]
}
```

If not, overwrite it with this and try restarting VS Code.

username_0: The tsconfig.json content is identical; the only difference is that the elements of the "include" array in the scaffold-generated file are wrapped onto separate lines. I overwrote it with the content you provided and restarted VS Code, but the problem is still there.

username_1: @username_0 Please share a repo that reproduces it. A project I generated here with `yarn create @umijs/umi-app` has no such problem.

username_0: It reproduces 100% of the time; I created the project several times and got the same error every time, and other people in the DingTalk group have run into the same problem. Here is the log from running the commands in a CMD opened as administrator:

```
C:\Windows\system32>D:

D:\>cd D:\dev\git-pro\umi-demo-4

D:\dev\git-pro\umi-demo-4>yarn create @umijs/umi-app
yarn create v1.22.4
[1/4] Resolving packages...
[2/4] Fetching packages...
info [email protected]: The platform "win32" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "@umijs/create-umi-app > @umijs/utils > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0".
[4/4] Building fresh packages...
success Installed "@umijs/[email protected]" with binaries:
      - create-umi-app
[#############################################################################################################] 236/236Copy: .editorconfig
Write: .gitignore
Copy: .prettierignore
Copy: .prettierrc
Write: .umirc.ts
Copy: mock/.gitkeep
Write: package.json
Copy: README.md
Copy: src/pages/index.less
Copy: src/pages/index.tsx
Copy: tsconfig.json
Copy: typings.d.ts
Done in 1.24s.

D:\dev\git-pro\umi-demo-4>yarn
yarn install v1.22.4
info No lockfile found.
[1/4] Resolving packages...
warning @umijs/preset-react > @umijs/plugin-antd > antd-mobile > babel-runtime > [email protected]: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
warning @umijs/preset-react > @umijs/plugin-antd > antd-mobile > rmc-list-view > fbjs > [email protected]: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
warning @umijs/preset-react > @umijs/plugin-test > @umijs/test > jest-environment-jsdom-fourteen > @jest/environment > @jest/transform > jest-haste-map > [email protected]: fsevents 1 will break on node v14+. Upgrade to fsevents 2 with massive improvements.
warning umi > @umijs/preset-built-in > @umijs/bundler-webpack > webpack > watchpack > [email protected]: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
warning umi > @umijs/preset-built-in > @umijs/bundler-webpack > webpack > watchpack > chokidar > [email protected]: fsevents 1 will break on node v14+. Upgrade to fsevents 2 with massive improvements.
[2/4] Fetching packages...
info [email protected]: The platform "win32" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "win32" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "@umijs/preset-react > @umijs/plugin-dva > [email protected]" has unmet peer dependency "[email protected]".
warning "@umijs/preset-react > @umijs/plugin-dva > [email protected]" has unmet peer dependency "dva-core@^1.1.0 | ^1.5.0-0 | ^1.6.0-0".
warning "@umijs/preset-react > @umijs/plugin-antd > antd > [email protected]" has unmet peer dependency "dayjs@^1.8.18".
warning "@umijs/preset-react > @umijs/plugin-dva > dva > [email protected]" has unmet peer dependency "react-router@^4.3.1 || ^5.0.0".
[4/4] Building fresh packages...
success Saved lockfile.
$ umi generate tmp
Done in 33.82s.

D:\dev\git-pro\umi-demo-4>node -v
v12.13.0

D:\dev\git-pro\umi-demo-4>
```

username_1: Oh no, I don't have a Windows machine...

username_2: The scaffold-generated code should be

```
import styles from './index.less';
```

It looks like your code is missing the `./`. Was it deleted by you, or is something else going on? Try adding it back by hand.

username_1: @username_0 It just occurred to me: adding one more entry to the `include` section of `tsconfig.json` should fix it:

`"include": ["mock/**/*", "src/**/*", "config/**/*", ".umirc.ts", "typings.d.ts"]`

username_2: None of my projects declare `include` at all.

username_0: That solution did fix it. The error was caused by the `include` of the scaffold-generated tsconfig.json missing typings.d.ts.

Status: Issue closed

username_1: @username_2 I find it strange too. A few days ago I didn't write `include` either; then yesterday I created a new app and it stopped working without `include`, so I added it. I still haven't found the cause.

username_3: Same problem, same workaround. I hope the scaffold template adds this.

username_4: This fix works.
soehlert/ansible-scribe
363390845
Title: Some defaults not written out Question: username_0: For some reason, there are a few roles where the defaults variables are present in the end_vars list right before it is turned into the data for the template, yet nothing is written out for them. I also can't figure out how to remove some of the empty end_vars entries.
feathersjs/docs
574889554
Title: Comment: Querying - Search (api/databases/querying.md) Question: username_0: For search, can I get an example of making `$like` work in Sequelize? Answers: username_1: It is already supported. Documentation can be found in the [feathers-sequelize repository](https://github.com/feathersjs-ecosystem/feathers-sequelize). Status: Issue closed
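For readers looking for the requested example, here is a minimal sketch over Feathers' REST transport, shown in Python purely for illustration. The service URL and field name are hypothetical, and with feathers-sequelize the `$like` operator may need to be whitelisted in the service options before queries like this are accepted:

```python
import requests

# Feathers parses bracketed query-string parameters, so
# `name[$like]=%cat%` arrives server-side as { name: { $like: '%cat%' } },
# which feathers-sequelize translates into a SQL LIKE clause.
resp = requests.get(
    "http://localhost:3030/messages",   # hypothetical service endpoint
    params={"name[$like]": "%cat%"},    # requests URL-encodes the % signs
)
print(resp.json())
```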
k08045kk/CopyTabTitleUrl
1108732332
Title: Custom Format - remove string from tab title AND copy part of string from URL Question: username_0: ![image](https://user-images.githubusercontent.com/46587193/150244304-efd1612d-3ba0-48bd-b956-1433fce24c6c.png)

Hi toshi, can you help me with this custom format, please? :)

- **Tab title:** Countdowns by ScorpVFX on Envato Elements
- **URL:** https://elements.envato.com/countdowns-DZ783Q9

### What I want to do:

1. **Copy the tab title:** Countdowns by ScorpVFX on Envato Elements
2. **Remove "_on Envato Elements_" from the tab title** = Countdowns by ScorpVFX
3. **Copy the _last portion_ of the URL after the last hyphen:** DZ783Q9
4. **Combine the results of step 2 and step 3**

**Final result:** Countdowns by ScorpVFX DZ783Q9

Answers: username_1: In the current version (v2.2.0), there is no function to copy a part of the title string, and URLs cannot be freely processed either. For complex processing, we suggest that you use a text editor or a program to process the title and URL after copying them. That's all.

username_0: Okay toshi, thanks a lot for this nice time-saving extension 🥇

Status: Issue closed
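Following the maintainer's suggestion to post-process the copied text with a program, a minimal sketch under the assumption that the copied title and URL are available as two strings (the function name is illustrative, not part of the extension):

```python
import re

def transform(title: str, url: str) -> str:
    # Step 2: drop the trailing " on Envato Elements" from the tab title.
    title = re.sub(r'\s+on Envato Elements$', '', title)
    # Step 3: take the portion of the URL after the last hyphen.
    code = url.rsplit('-', 1)[-1]
    # Step 4: combine both parts.
    return f'{title} {code}'

print(transform('Countdowns by ScorpVFX on Envato Elements',
                'https://elements.envato.com/countdowns-DZ783Q9'))
# -> Countdowns by ScorpVFX DZ783Q9
```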
sighjs/sigh
95926857
Title: real world example Question: username_0: Would be great to have a more complex real world example with support for

* sass
* less
* browser-sync
* source maps
* sitemap.xml
* react (jsx to js)

Answers: username_1: I'm happy to help with any questions you may have; there are a few examples shown in the main README, on the various sigh-plugin READMEs and in [the presentation](http://sighjs.github.io/). jsx is just an option to the babel plugin, and there are many examples shown which use this sigh plugin; the options object is passed through directly to the babel compiler API, so you can check their API documentation for these options. sass is shown [on the sass plugin page](https://github.com/sighjs/sigh-sass), same story with the options object and their API. less/browser-sync/sitemap.xml may be supported with the gulp plugin compatibility layer; there are currently no sigh plugins for those things. Let me know if you want me to go into more detail on any of those things.

username_1: The presentation is best viewed in chrome and in full screen (hit the F button) and navigated with the arrow keys, otherwise you'll miss many of the examples.

username_1: Oh, I forgot to mention source maps. Source maps are just supported out of the box; APIs are provided for plugin writers to make it trivial for them to transitively apply and concatenate source maps. As long as the underlying transformer API the plugin is supporting is capable of generating a one-step source map, the plugin should support it. All existing plugins support source maps and most gulp plugins should out of the box also.

username_1: Here are a few more good examples:

* https://github.com/username_1/sigh-talk/blob/master/sigh.js
* https://github.com/sighjs/sigh-livereload/blob/master/README.markdown
* https://github.com/sighjs/sigh/blob/master/sigh.js (sigh's sigh.js used to bootstrap sigh)

All the features used in those files are explained in the main documentation and in the documentation for the respective plugins used.

username_0: Thanks, already found the presentation. The links are nice to give an idea but are nowhere near a full setup.

username_1: Many of them are links to sigh files being used to build various software on github right now, so they are real world examples; perhaps you were thinking more along the lines of a "full-stack" web example? If so I can dig up some examples of full-stack projects. I'm currently using it in a few projects of this nature alongside the jspm package manager.

username_0: Yeah - they seemed like simplified examples. But well - if the setup is simple then it's still real-world :) Indeed - a full-stack example would be great.

username_1: Maybe they seem simplified because sigh is so expressive ;) If you were to replicate them in gulp/grunt then they would look a lot more substantial. Hehe.

username_1: @username_0 The examples I can find together all make a great setup, but each one is missing something (one doesn't have tests, one doesn't use sass/less). I think the best thing is for me to make a yeoman `sigh-fullstack` generator, then also document that example in the README. :rocket: *kill* 2 :baby_chick:

username_0: Documentation is certainly an option, too. Are there any drawbacks to using gulp plugins?

username_1: @username_0 Sorry for the delay in responding, I've had the flu. All gulp plugins that are single stream transformers are supported and you shouldn't find any drawbacks. Gulp plugins that supply multiple methods that need to be used together are not supported.
username_1: Added browser-sync plugin: https://github.com/sighjs/sigh-browser-sync

username_0: Great! I am still failing to wrap my head around the plugin API though. I guess for implementing `less` I could probably manage by looking at the `sass` plugin. Same for `markdown` support. Where I am lost is templating (e.g. swig) and passing in collections to the templates (like https://github.com/segmentio/metalsmith-collections), which could also be used for rss/atom feeds and the sitemap.xml.

username_0: I've been trying the following

```json
{
  "devDependencies": {
    "sigh": "^0.12.18",
    "sigh-babel": "^0.11.5",
    "gulp-markdown": "^1.0.0"
  },
  "dependencies": {
  }
}
```

```javascript
module.exports = function(pipelines) {
  pipelines.md = [
    glob('posts/**/*.md'),
    markdown(),
    write('build')
  ]

  pipelines.js = [
    glob('src/**/*.js'),
    babel(),
    write('build/assets')
  ]

  pipelines.alias.build = [ 'js', 'md' ]
}
```

but when running I get

```
$ sigh -w
sigh-test/node_modules/sigh/src/Event.js:83
      this._sourceMap.sourcesContent = [ this.sourceData ]
                                     ^
TypeError: Cannot set property 'sourcesContent' of undefined
    at _default._createClass.get (sigh-test/node_modules/sigh/src/Event.js:83:37)
    at sigh-test/node_modules/sigh/src/gulp-adapter.js:50:32
```

username_0: Same with the `gulp-marked` plugin.

username_1: Instead of `glob('src/**/*.js')` you probably want `glob({ baseDir: 'src' }, '**/*.js')`, otherwise the `src` directory will be used in the output folder.

As for the error you're getting there... sorry about that, I just fixed it and released version `v0.12.19`. It's another issue to do with supporting files where identity source maps cannot be calculated. Let me know if there's anything else standing in your way, I'm happy to provide as much support as you need!

Status: Issue closed

username_1: Oops, didn't mean to close it. I'll have a full-stack example ready for you next weekend.

username_1: Would be great to have a more complex real world example with support for

* sass
* less
* browser-sync
* source maps
* templates (swig etc.)
* markdown
* sitemap.xml
* react (jsx to js)

username_1: It's definitely worth reading a tutorial on functional reactive programming if you haven't already, especially one on `Bacon.js`, the FRP library sigh uses. It takes a different kind of mindset to how you might be used to working with javascript. More like writing Haskell. The existing plugins also use Promises a lot; you should definitely be aware of [composition of promises](https://gist.github.com/domenic/3889970). The costs of learning FRP are higher, but once you know it, the rewards you can gain back in writing neater, shorter, more understandable and maintainable code are worth it.

username_0: Awesome. That release got me much further. I am not new to reactive programming. For example I fell in love with `ReactiveCocoa` quite a while ago - but I will have a closer look into `Bacon.js` as well. The intro at https://github.com/sighjs/sigh/blob/master/docs/writing-plugins.md is a good start, and looking at https://github.com/sighjs/sigh-sass/blob/master/src/index.js it's quite clear what happens - but it seems to be working completely on a single file level. So I looked at https://github.com/sighjs/sigh/blob/master/src/plugin/concat.js and got a bit lost. What should be trivial seems like quite some work - and frankly I just don't get https://github.com/sighjs/sigh/blob/master/src/plugin/concat.js#L15 with the `opTreeIndex` for example. And why is there a collection of events here https://github.com/sighjs/sigh/blob/master/src/plugin/concat.js#L21?
Is this what is getting buffered with the `debounce`? And how could a plugin provide input to another plugin? For example providing a collection (like blog posts) to a template engine.

username_1: Plugins forward events down the tree; plugins further down the tree receive events from plugins before them in the tree. e.g.

```javascript
[ glob('*.js'), babel(), write() ]
```

glob sends events to babel, babel receives those events and transforms them, then write receives the transformed events. You can also use `merge` to fork the tree and recombine events, `filter` to remove events, and `pipeline` to connect events together.

username_1: About `opTreeIndex`:

```javascript
[
  merge(
    glob('*.js'),
    glob('client/*.js')
  ),
  concat()
]
```

Events from `glob(*.js)` have an `opTreeIndex` of `1`, events from `glob(client/*.js)` have an `opTreeIndex` of `2`. Therefore `concat` knows to order files matching the pattern `*.js` before the files matching the pattern `client/*.js` when it concatenates all the files.

username_1: BTW everything I've mentioned in my last two messages can also be found in [the plugin writing guide](https://github.com/sighjs/sigh/blob/master/docs/writing-plugins.md).

username_0: Well, without knowing a bit more about how `sighjs` works under the hood, this explanation "depth-first index of operator within pipeline tree. This can be written to in order to this to set the treeIndex for the next pipeline operation otherwise it is incremented by one." wasn't quite that clear - your two messages were much(!) better at explaining it. OK - I think I've got the general idea of how it works now :)

username_0: About passing down information. The general passing down of events is clear - but let's say I wanted to implement something like https://github.com/segmentio/metalsmith-collections. Then I would have a plugin globbing files as input; the plugin would have to cache the collection (and keep it sorted) and on changes would pass on a special event with the sorted collection of metadata as content. The events from the collection and from the normal template pipeline somehow have to be merged to make them available to the template plugin. Did I get that somewhat right?

username_1: You can pass whatever you want down the stream, as long as the plugins further down the stream can deal with that payload. In most cases you'd want the payload to be an array of Event objects, but it doesn't have to be. You could also attach new fields to the Event objects containing whatever metadata you need. These fields could even all reference the same object, allowing a subsequent plugin to read the metadata from any event.

username_1: It might even be worth introducing a new `CustomEvent` type also; each one would contain a `tag` (detailing what kind of event it is) along with a custom data object. This might be easier for plugins to ignore by default.

username_0: As long as they are passed by reference, the collections could be set as metadata on every event - but I am not sure that really works well. Let's say I have a blog post and change just the date. The only thing that really changes is the order in the collection. As the dependency is on the collection - what event should be passed to trigger all dependent updates?

username_1: Well, when you change that file you'd probably want to pass an event representing that change down the stream anyway, so it doesn't seem so bad to attach the metadata to it.
Then again, maybe you don't care about forwarding the input events, in which case you could have the plugin only pass on custom metadata events?

username_0: Then let's have a look at an example. Here is a snippet from a metalsmith pipeline:

```javascript
.use(collections({
  posts: {
    pattern: 'posts/**',
    sortBy: 'date',
    reverse: true
  }
}))
.use(feed({
  collection: 'posts',
  destination: 'feed.xml'
}))
```

It should be obvious what's going on there. We define a collection of blog posts and the feed gets a reference to it. Same goes for a template that could be an archive page:

```html
<article>
  <ul>
    {{#each collections.posts}}
    <li>
      <h3>{{this.title}}</h3>
      <article>{{this.contents}}</article>
    </li>
    {{/each}}
  </ul>
</article>
```

What happens if you attach the collection as metadata to the event? So let's say I change the date of a blog post and save. The save event trickles down the `sighjs` pipeline. The relevant blog post page gets updated because there is an easy reference from the event. What about the archive page though? The archive page is only connected to the collection.

username_1: How about:

```javascript
pipelines.blog = [
  glob('*.md'),
  collections(...),
  merge(
    feed('posts'),
    archive('posts')
  ),
  write('build')
]
```

So the `collections` only passes special metadata events down the stream, then subsequent plugins can turn that metadata into `Event` objects representing files to be written. In this case the `merge` forwards the same metadata to two plugins, then merges the output of these plugins back together to send down the stream.

username_1: Or instead of `archive` maybe it should be something like `collectionTemplate(...)`, which could take both normal `Event` objects as well as `metadata` objects; it would then apply the `metadata` to the `Event` objects via the template system.

username_1: Like um:

```javascript
pipelines.blog = [
  glob('*.md'),
  collections(...),
  merge(
    feed('posts'),
    [ glob('templates/*.html'), collectionTemplates() ]
  ),
  write('build')
]
```

This relies on the fact that `glob` plugins also forward their inputs down the stream in addition to creating events based on the supplied pattern(s).

username_0: The `feed` is generated solely based on the special events generated by the collections plugin. The `archive` would really be a `template` plugin that should react to several events: mainly the template file and then the collection (plus there might also be references inside the template to other template files/partials). You were faster than me :) but yeah, the last one looks about right.

username_0: Probably a little more like this though:

```javascript
pipelines.blog = [
  glob('*.md'),
  collections({ posts: '*.md' }),
  merge(
    feed('posts'),
    [
      glob('templates/*.swig'),
      template('post.swig', {
        collection: resolve_collection('posts'),
        something: 'bla'
      })
    ]
  ),
  write('build')
]
```

Not sure how to make the reference to the collection clear. Explicit as a parameter is ugly too:

```javascript
template('post.swig', [
  'posts' // collections
], {
  something: 'bla' // other context values
})
```

username_1: You pass in `post.swig` as, like, the "master" template name? Then the other events I guess would be used for references `posts.swig` makes to other templates?

```javascript
pipelines.blog = [
  glob('*.md'),
  collections({ posts: '*.md' }),
  merge(
    feed('posts'),
    [
      glob('templates/*.swig'),
      template('post.swig', { collection: 'posts', context: { ... } })
    ]
  ),
  write('build')
]
```

username_1: Or name them all:

```javascript
template({
  root: 'posts.swig',
  collection: 'posts',
  context: { ... }
})
```

You could also use `baseDir` in your template glob depending on how you want to structure your built directory tree.

username_0: Exactly. All posts should use the `post.swig` template. If we had the full dependency tree, the glob for `*.swig` would not be necessary. Since we don't have that information - write on all changes.

username_0: `root` feels wrong - although I understand where you are coming from. It's really just the template (that might or might not include other templates/partials). Given that the `context` and the `collection` would need to be merged into a single context anyway (as that is a common interface for template engines), it feels quite ugly to use the explicit naming approach. This is what it should look like for the template:

```javascript
{
  collection: {
    posts: [
      { title: "", date: "", excerpt: "", ... },
      { title: "", date: "", excerpt: "", ... }
    ]
  },
  something: 'bla'
}
```

username_0: BTW: I am working on a comparison project over here https://github.com/username_0/site-boilerplate Still debugging the gulp stuff and still have to finish up the metalsmith setup. If you'd work out the sighjs setup - that would be fantastic. It should at least give a baseline for the full-stack example.

username_1: Oh, I've been working on this all weekend actually, I meant to give you a status update! Here's an example full-stack sigh file:

```javascript
var merge, env, pipeline, debounce, select, reject
var glob, concat, write, babel, uglify, process, sass, browserSync, mocha

module.exports = function(pipelines) {
  pipelines.alias.build = [ 'client-js', 'css', 'html', 'server-js' ]

  // client side:
  pipelines['client-js'] = [
    glob({ basePath: 'client' }, '*.js'),
    babel({ modules: 'system' }),
    env(
      // TODO: use sigh-jspm-bundle instead
      [
        concat('app.js'),
        uglify(),
      ],
      'production'
    ),
    write({ clobber: '!(jspm_packages|config.js)' }, 'build/client')
  ]

  pipelines.css = [
    glob({ basePath: 'client' }, '*.scss'),
    sass(),
    write('build/client')
  ]

  pipelines.html = [
    pipeline({ activate: true }, 'client-js', 'css'),
    glob({ basePath: 'client' }, '*.html'),
    // in development mode also inject the browser-sync enabling fragment
    env(
      glob('lib/browser-sync.js'),
      'development'
    ),
    debounce(600),
    // TODO: inject css paths and browser-sync fragment into html here using `sigh-injector`
    select({ fileType: 'html' }),
    write('build/client')
  ]

  pipelines['browser-sync'] = [
    pipeline('html', 'css', 'client-js'),
    browserSync({ notify: false })
  ]

  // server side:
  pipelines['server-js'] = [
    glob({ basePath: 'server' }, '*.js'),
    babel({ modules: 'common' }),
    write({ clobber: true }, 'build/server')
  ]

  pipelines['server-test'] = [
    pipeline('server-js'),
    pipeline({ activate: true }, 'mocha'),
  ]

  pipelines.explicit.mocha = [
    mocha({ files: 'build/**/*.spec.js' })
  ]

  pipelines['server-run'] = [
    pipeline('server-js'),
    reject({ projectPath: /\.spec\.js$/ }),
    process('node build/server/app.js')
  ]
}
```

The idea is to use HTTP2 for development; that way you can split up and cache individual files without the performance delays of the HTTP1 headers. I just wanted to add three more things before turning it into a yeoman generator:

* Need to write `sigh-injector` for injecting stylesheet links, and also to inject all javascript modules into SystemJS' depcache (so all the JS module HTTP2 requests happen in one go).
* A `sigh-jspm-bundle` plugin would be better than the concatenate + uglify step this version does.
* Add client-side testing with `karma`/`protractor` + mocha to the example.
username_1: BTW, maybe you're aware, but your gulp example there would be horribly inefficient as it rebuilds the entire system on stream changes. You'd need to use `gulp-cached`, `gulp-remember` and also probably `gulp-order` to fix it. This is one of the things that really cheesed me off with gulp actually.

username_1: It's also much more common to alias `require('gulp')` as the variable `gulp` rather than `Gulp`.

username_0: @username_1 Aware of the horrible re-building behaviour. For now I was just trying to focus on getting the collection thing working. But right now the stream splitting and then merging seems to be a major problem. `merge2` or `combined-stream2` is not working as documented. The more I work with `gulp` the more I want to run away screaming :) Thanks a lot for the better example! For my needs it's still lacking templating and collections though. Happy to work on it myself - but I might need a little guidance.

username_1: Merging streams doesn't really work very well in gulp; I raised an issue against gulp showing that the source maps get corrupted when merging streams. The gulp author was really rude though, so I abandoned the bug and have no idea if it's been fixed.

username_0: At least it's not me then. I don't know if it's the same thing - but from what I am seeing, when I split the stream, subsequent transformations affect both streams (although they should now be separate - according to the gulp folks). Then merging them back is yet another problem. sighjs feels so much better - but especially the collection handling (as we discussed before) seems tricky. For gulp I had to fork most of the plugins anyway as they didn't really work as I needed them, so having to write new ones for sighjs is no longer really a con when comparing the two. But the collection thing is really the most crucial part to implement the boilerplate.

username_0: Hm. I don't understand how to rename/move files yet. There is `changeFileSuffix(targetSuffix)` but that's just not enough.

username_1: sigh, like gulp, passes data in memory. You can modify the events or create new events that get passed down the pipeline; the `write` plugin will then write the events it receives to the fs according to the `projectPath` field of each `event` it receives. You can also use `select`/`reject` to filter events before they reach the `write` as shown in the example.
glob({ basePath: 'content/posts' }, '/**/*.md'), marked(), ext('html'), collect('posts'), template({ template: 'post.swig', layouts: 'design/templates', collection: 'posts', context: {} }), debounce(), // so the feed does not gets written for every post on inital startup feed({ collection: 'posts', filename: 'feed.xml' }) ] pipelines['pages:md'] = [ glob({ basePath: 'content' }, '/**/*.md'), reject("posts"), // already processed by posts:md marked(), ext('html'), template({ template: 'page.swig', layouts: 'design/templates', collection: 'posts', context: {} }) ] pipelines['pages:swig'] = [ // glob('design/layouts/*.swig'), // only to trigger updates - but how? glob({ basePath: 'content' }, '/**/*.html.swig'), template({ layouts: 'design/templates', collection: 'posts', context: {} }), ext('html') ] pipeline['content'] = [ merge( pipeline('posts:md'), // populates the collection pipeline('pages:md'), // needs the collection (or is this concurrent?) pipeline('pages:swig'), // needs the collection (or is this concurrent?) ), sitemap({ filename: 'sitemap.xml' }), write('build') ] pipelines['assets'] = [ merge( [ glob({ basePath: 'design' }, '**/*.less'), less() ], [ glob({ basePath: 'design' }, '**/*.scss'), sass() ], [ glob({ basePath: 'design' }, '**/*.js'), babel() ] [ glob({ basePath: 'content/posts' }, '/**/*.(jpg|png)'), // copy post assets (all but md) ] ), write('build/assets') ] pipelines.alias.build = [ 'content', 'assets' ] } Various plugins are still missing - but some comments or suggestion much welcome. Especially on the collection thing. username_1: `merge` is used to split a pipeline in two. The two pipelines in a row would forward the output from the first pipeline as the input of the next pipeline. Usually a pipeline has no input except the empty array `[]` that is used to activate it. That is unless you use the `pipeline` operation to connect it to another pipeline's stream. Although in some cases you use `pipeline` in a way where it can only give output, in that case no dependency is formed between the pipelines. I'd name pipelines `like-this` rather than `like:this`... `grunt` uses `:` as an argument separator and sigh may do the same in the future. I wonder if you intended to copy the `scss` and `less` files etc. to your build directory? Don't you only need the built assets in your build directory... the source maps use the `sourceContent` attribute so you don't need an accessible copy of the sources on the server. I also think it's usually better to include the `write` as part of the pipeline, rather than using it at the end in a merged stream. This way a pipeline can pass on events that refer to written resources if referenced by another pipeline. That's usually what you want, although maybe not in this case. The `// so the feed does not gets written for every post on inital startup` comment actually isn't right... the pipeline will stay open until it is closed by exhausting all source globs when not run with `-w`. The `debounce` will only really stop subsequent events happening more frequently which can avoid a bit of wasted work (usually at the cost of some time). The `debounce` is then disabled during the "file update" stage when not using `-w`. The `// glob('design/layouts/*.swig'), // only to trigger updates - but how?` comment... I don't really think it should work like that. Any plugin that uses file data should ideally use it via the event stream, that shouldn't matter whether things are updating or not. 
It seems to me you think some of these plugins should load data directly from the filesystem. While that is okay for some plugins it should generally be avoided and reading file data from `Event` objects should be used. Then you don't have to worry about things like that. username_1: So like this: ```javascript template({ template: 'post.swig', layouts: 'design/templates', collection: 'posts', context: {} }), ``` Rather than loading layouts directly from the filesystem at `design/templates` it should read these layouts from the event stream. You can give the plugin some kind of selector parameter so that it knows how to discern things that are a `layout` from things that are not. username_1: ```javascript glob({ basePath: 'content' }, '/**/*.md'), reject("posts"), // already processed by posts:md ``` The `reject` criteria wouldn't work there. I think instead you should maybe use a more accurate glob pattern. I think something like `!(posts)/**/*.md` might work. username_1: Oh and if you use "```javascript" as the first line of your code sections you can get syntax highlighting in your git comments. username_1: One other thing it's `**/*.js` not `/**/*.js`. I'm not sure about this `collection` and `template` thing... if they are always work together couldn't it just be a single plugin instead of two? username_0: Well, it would be against the speration of concerns. It would again mean separating the kind of events. And most importantly (what I haven't looked into) being able to wait for all events to arrive before applying the template. That sounds a bit tricky given the nature of reactive pipelines. username_1: You can just use debounce to wait for all events if you need to, but you can put that before the two plugins. I would only make `collect` and `template` two plugins if you can see the output of `collect` ever being useful to another plugin. I apply the criteria of separation of concerns more as a coding principle than something that applies at a library/plugin level. username_1: Ah I see about point one I was missing the `sass` etc. plugins, I thought they were passing the files right through for some reason. Sorry for confusing you! username_0: Ah - OK. Then I didn't get it all wrong :) `reject` was a bit more like pseudo code. I guess it would need to check against the `projectPath`. As for `collect` and `template` - `feed` would also use the collection to build the atom feed. As for the `debounce` and the collection - so something like this would already make sure the template sees the full collection? collect(...) debounce(...) template(use all collected) Does the plugin get any callback on the completion? I would need to sort the collection, too. username_1: Completion of what? username_0: So how I understood it initially: On the pipeline startup the glob is producing events until the match is exhausted. When there is no debounce they will just flow through the pipeline immediately. When there is a debounce it will buffer the events and send them in larger batches. Hence my original debounce where you said I got it wrong. Re-reading it sounds like on startup all events are being passed in one large batch and the debounce is only for subsequent events. Is that correct? I was wondering if there is a callback for the first glob/pipeline exhaustion. Thinking about it this is probably not needed. username_1: So each glob pattern sends events in batches, but multiple plugins send their events down the stream on their own schedule. 
The `debounce` only affects events before the `watch` phase starts, the debounce changes to a pass-through after it sees the first event that came from a watched file updating. So you basically got it right, just not realising that `glob` can initially batch! username_1: * `glob(*.md)` -> sends an array of all files matching `*.md` as first event. * `merge( glob(*.md), glob(*.html) )` -> sends two items down the stream, each being an array of matching events. In that case they will be basically a few milliseconds apart at most, if you have different transforms chained off of each glob then the time between the two payloads can be longer. username_1: Something that can also work is `[ glob(*.md), glob(*.html), write(...) ]`. This is because glob not only produces events but forwards the events it receives. In that case, you'd get a single payload as glob will forward the first input it receives along with the initial bunch of matched files. This is really great as it lets you avoid using `debounce` which can be slower (despite being slightly less parallel). username_0: I took a step back and tried to express the blog model. Still just some pseudo code - and frankly speaking I hope there is a better way. It doesn't look very maintainable like this: module.exports = function(pipelines) { pipelines['post_md_collection'] = [ glob({ basePath: 'content/posts' }, '/**/*.md'), frontmatter(), marked() ] pipelines['post1'] = [ merge( pipeline('post_md_collection'), pipeline('post_design') ), template(reference to post_md_collection and post_design), write() ] pipelines['page1'] = [ merge( glob({ basePath: 'content' }, '/**/*.html.swig'), pipeline('post_md_collection'), pipeline('page_design') ), template(reference to post_md_collection and page_design), write() ] pipelines['page_md_collection'] = [ glob({ basePath: 'content' }, '/(!post)**/*.md'), frontmatter(), marked(), ] pipelines['page2'] = [ merge( pipeline('page_md_collection'), pipeline('page_design') ), template(reference to page_md_collection and page_design), write() ] pipelines['feed'] = [ pipeline('post_md_collection'), feed(), write() ] pipelines['sitemap'] = [ merge( pipeline('page1'), pipeline('page2'), pipeline('post1') ), sitemap(), write() ] [Truncated] write('build/assets') ] pipelines['asset2'] = [ glob({ basePath: 'content/posts' }, '/**/*.(jpg|png)'), write('build') ] pipelines['reload'] = [ merge( asset1, asset2, page1, page2, post1 ), reload() ] } username_1: At the moment you're only taking output from `merge` operations, you could simplify this a lot by remove most of the pipelines and taking advantage of the fact that `merge` also passes input data. That together with `glob` forwarding input events down the stream in addition to generating events from the `fs`. username_0: Could give an example on where you would see this helping? username_1: I wonder if piping everything into a `metalsmith` plugin would be a better idea, metalsmith seems to be a really good static site generator whereas sigh is more of a build system. It would be great if sigh was flexible enough to support both models though but metalsmith seems to have a massive amount of plugins aimed squarely at static site generation. username_1: I don't really get the example, pipeline names like `page1` and `page2` aren't descriptive enough for me to work out what's going on. I'd probably see static site generation as something involving less branches and more specialised plugins... 
it would be really great if sigh could call out to an embedded metalsmith configuration for that kind of thing.

username_0: @username_1 I am thinking more in a signalling way and tried to map that to the sighjs API. Maybe this makes more sense to you?

```
g <- content/posts/**/index.md
  frontmatter
  marked
  -> post collection changed

# <- post collection changed
  <- post design changed
  template
  write
  -> post changed

g <- content/**/*.html.swig
  <- post collection changed
  <- page design changed
  template
  write
  -> page changed

g <- content/(!post)**/*.md
  frontmatter
  marked
  -> page collection changed

? <- page collection changed
  <- page design changed
  template
  -> page changed

# <- post collection changed
  feed
  write
  -> feed changed

# <- page changed
  <- post changed
  sitemap
  write
  -> sitemap changed

g <- design/templates/base.swig
  -> base design changed

g <- design/templates/post.swig
  <- base design changed
  -> post design changed

g <- design/templates/page.swig
  <- base design changed
  -> page design changed

g <- design/**/*.scss
  sass
  write
  -> asset changed

g <- design/**/*.less
  less
  write
  -> asset changed

g <- design/**/*.js
  babel
  write
  -> asset changed

g <- design/**/*.png
  write
  -> asset changed

g <- design/**/*.ttf
  write
  -> asset changed

#browsersync <- asset changed
  <- page changed
  <- post changed
  reload
```

username_0: I've also tried to use `metalsmith` (see the example) but it's not really that great either. I had to fork a couple of plugins already because they were not flexible enough. It's sort of working now - but the collection plugin also needs fixing to support the browser-sync style of continuous running. And I am not sure that `metalsmith` and `sighjs` are a great match.

username_1: I see. I'd really like it if `sigh` could work as a static site generator; this in itself does just seem like a kind of build system I suppose. Do you have a github repository that stores your static resources? Maybe looking at the content would help me understand it better, I'm still not really sure about the page1/page2 stuff etc. I've used jekyll though in the past so I understand roughly what's going on. :baby_chick:

username_0: Have a look here https://github.com/username_0/site-boilerplate

username_2: Hi @username_1 I'm wondering whether you have thoughts now about what would be a good first step to bring some static website generation into sigh? Would it be around templating, for instance wrapping consolidate.js as metalsmith [does](https://github.com/superwolff/metalsmith-layouts#engine)? Are there some limitations you can think of if trying to pass large data structures (I guess as event metadata) through some pipelines to provide the model data to templates, or [file objects with metadata as metalsmith does](https://github.com/metalsmith/metalsmith#how-does-it-work)? The FRP approach of sigh is really appealing and I'm trying to evaluate whether I could use it as the "top level" build tool, maybe wrapping other tools. For instance wrapping pandoc in a plugin is [also something metalsmith does](https://github.com/iilab/metalsmith-pandoc) and I wonder if you think this should be fairly easy to achieve with sigh? Thanks for the great work!

username_1: Just a matter of writing some plugins; sigh provides an easy way to scaffold plugins, and with its multiprocess model the complexity of plugins and/or their underlying libraries shouldn't be a worry.
The only major issue with sigh as opposed to alternatives is that it currently doesn't have the momentum/userbase of other build systems, leading to a lack of existing plugins. Technical superiority alone isn't enough to guarantee popularity; maybe I'm just not good enough at marketing :(

username_2: Or maybe having a few more plugins will help? If you look at how metalsmith has done it, they've provided examples and base plugins that reproduce features their users would be likely to want. This helps provide a migration path for those who are shopping around for a technically superior option (like me, trying to let go of metalsmith) :) I understand that your main focus is the build process, and in many ways it's great to stay focused, but adding a few more use cases might help attract more attention to your work? In any case, I might take a shot at implementing a plugin to get a feel for it and I'll reach out for help when I do!

username_1: You raise some really good points, maybe I'll try adding a metalsmith plugin adapter. There is already a gulp plugin adapter but it only works with a limited set of plugins.

username_2: A metalsmith plugin adapter would be really cool! :+1:

username_1: I'll close this now, please raise more specific issues if there's anything particularly pressing.

Status: Issue closed
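As an aside for readers following this thread, here is a toy model of the glob -> transform -> write event flow discussed above, with plain Python generators standing in for sigh's Bacon.js streams; it is only an illustration of the batching semantics, not sigh code:

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str          # 'add', 'change', ...
    project_path: str  # path relative to the project root
    data: str

def glob(files):
    # First payload: a single batch containing every matched file.
    yield [Event('add', path, data) for path, data in files.items()]

def transform(stream, fn):
    # Modify or replace events, then forward them downstream.
    for batch in stream:
        yield [Event(e.type, e.project_path, fn(e.data)) for e in batch]

def write(stream, out):
    # Terminal operator: "writes" each event it receives.
    for batch in stream:
        for e in batch:
            out[e.project_path] = e.data
    return out

out = write(transform(glob({'a.md': '# a', 'b.md': '# b'}), str.upper), {})
print(out)  # {'a.md': '# A', 'b.md': '# B'}
```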
tidyverse/tibble
330680807
Title: Support for data frame columns Question: username_0: ``` r library(dplyr, warn.conflicts = FALSE) df <- data.frame(x1 = rep(1:3, times = 3), x2 = 1:9) df$x3 <- df %>% mutate(x3 = x2) as_tibble(df) #> Error: All columns in a tibble must be a 1d vector or a list: #> * Column `x3` is data.frame ``` Related to https://github.com/tidyverse/dplyr/issues/3630 Some dplyr code can lead to a tibble with a data frame column and in that case printing and [ is broken: ``` r library(dplyr, warn.conflicts = FALSE) df <- data.frame(x1 = rep(1:3, times = 3), x2 = 1:9) df$x3 <- df %>% mutate(x3 = x2) d <- group_by(df, x1) # looks ok str(d) #> Classes 'grouped_df', 'tbl_df', 'tbl' and 'data.frame': 9 obs. of 3 variables: #> $ x1: int 1 2 3 1 2 3 1 2 3 #> $ x2: int 1 2 3 4 5 6 7 8 9 #> $ x3:'data.frame': 9 obs. of 3 variables: #> ..$ x1: int 1 2 3 1 2 3 1 2 3 #> ..$ x2: int 1 2 3 4 5 6 7 8 9 #> ..$ x3: int 1 2 3 4 5 6 7 8 9 #> - attr(*, "groups")=Classes 'tbl_df', 'tbl' and 'data.frame': 3 obs. of 2 variables: #> ..$ x1 : int 1 2 3 #> ..$ .rows:List of 3 #> .. ..$ : int 1 4 7 #> .. ..$ : int 2 5 8 #> .. ..$ : int 3 6 9 # fails d #> Error in `[.data.frame`(X[[i]], ...): undefined columns selected # whereas as.data.frame(d)[1:3, ] #> x1 x2 x3.x1 x3.x2 x3.x3 #> 1 1 1 1 1 1 #> 2 2 2 2 2 2 #> 3 3 3 3 3 3 # that's just weird. `[.grouped_df` is in dplyr d[1:3, ] #> # A tibble: 3 x 3 #> # Groups: x1 [3] #> x1 x2 x3 #> <int> <int> <data.frame> #> 1 1 1 c(1, 2, 3, 1, 2, 3, 1, 2, 3) #> 2 2 2 1:9 #> 3 3 3 1:9 # but [.tbl_df is in tibble and already does this class(d) <- c("tbl_df", "tbl", "data.frame") attr(d,"groups") <- NULL d[1:3, ] #> # A tibble: 3 x 3 #> x1 x2 x3 #> <int> <int> <data.frame> #> 1 1 1 c(1, 2, 3, 1, 2, 3, 1, 2, 3) #> 2 2 2 1:9 #> 3 3 3 1:9 # and ... d #> Error in `[.data.frame`(X[[i]], ...): undefined columns selected ``` Status: Issue closed Answers: username_0: Duplicate to #416
cornellius-gp/gpytorch
818957630
Title: [Bug] New release initialize MultitaskGaussianLikelihood Question: username_0: # 🐛 Bug Hi and thanks for the great work. Before the new release I used to initialize the value of a MultitaskGaussianLikelihood with ```python likelihood.noise_covar.noise = torch.tensor([0.04]) likelihood.noise = torch.tensor([1e-4]) ``` but now this throws an error saying `likelihood.noise_covar.noise` does not exist and `likelihood.noise` is the wrong size for `num_tasks` > 1. Any idea how I am supposed to set the value of the MultitaskGaussianLikelihood now? See #1303 for the post that advised me to initialize this way in the first place. Thanks! ## To reproduce ```python import torch import gpytorch import logging import math import numpy as np from matplotlib import pyplot as plt class ExactGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood, kernel): train_x = torch.squeeze(train_x) train_y = torch.squeeze(train_y) super(ExactGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = kernel def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) def optimize(self, likelihood, train_x, train_y, training_iter=50, optimizer='Adam'): train_x = torch.squeeze(train_x) train_y = torch.squeeze(train_y) self.train() likelihood.train() if optimizer == 'Adam': optimizer = torch.optim.Adam(self.parameters(), lr=0.1) else: logging.error('Undefined optimizer') mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, self) for i in range(training_iter): # Zero gradients from previous iteration optimizer.zero_grad() # Output from model output = self(train_x) # Calculate loss and backpropagate gradients loss = -mll(output, train_y) loss.backward() optimizer.step() def predict(self, x, likelihood, full_cov=False): self.eval() [Truncated] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/mona/PhD_code/observer_add_GPyTorch_advanced/src/script4.py", line 89, in <module> likelihood.noise_covar.noise = torch.tensor([0.04]) File "/Users/mona/PhD_code/observer_add_GPyTorch_advanced/venv/lib/python3.9/site-packages/gpytorch/module.py", line 414, in __getattr__ raise e File "/Users/mona/PhD_code/observer_add_GPyTorch_advanced/venv/lib/python3.9/site-packages/gpytorch/module.py", line 409, in __getattr__ return super().__getattr__(name) File "/Users/mona/PhD_code/observer_add_GPyTorch_advanced/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 778, in __getattr__ raise ModuleAttributeError("'{}' object has no attribute '{}'".format( torch.nn.modules.module.ModuleAttributeError: 'MultitaskGaussianLikelihood' object has no attribute 'noise_covar' ``` ## System information - GPyTorch version 1.4.0 - PyTorch version 1.7.1 - Mac OS Big Sur Answers: username_1: `MultitaskGaussianLikelihood` uses a different implementation since #1471. The internals should clarify the meaning of the different noises better. You should be able to set things using the `likelihood.noise` and `likelihood.task_noises` instead (you can now set different noise levels for different tasks). . username_1: @username_2 username_2: Sorry for the confusion. I recently refactored `MultitaskGaussianLikelihood` to better exploit Kronecker structure throughout and probably should have done a better job documenting the changes. 
It looks like I need to add in a couple more setters for the task noise, which I'll set up soon, as well as fix the setter for the noise.

You should be able to set the raw task covariance noises by directly modifying the task noises:

`likelihood.raw_task_noises.data = likelihood.raw_task_noises_constraint.inverse_transform(torch.tensor([0.04, 0.04, 0.04]))`

Similarly, you can set:

`likelihood.raw_noise.data = likelihood.raw_noise_constraint.inverse_transform(torch.tensor([1e-4]))`

username_0: Great, thanks a lot @username_2!

username_0: Hi @username_2 @username_1, any news on this pull request?
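Putting the two setters above together, a consolidated sketch for a three-task likelihood; this only restates the workaround from this thread, and the attribute names may shift in future releases:

```python
import torch
import gpytorch

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3)

# Per-task noise: go through the registered constraint so the raw
# (unconstrained) parameter lands at the desired constrained value.
task_noises = torch.tensor([0.04, 0.04, 0.04])
likelihood.raw_task_noises.data = \
    likelihood.raw_task_noises_constraint.inverse_transform(task_noises)

# Global (shared) noise term, set the same way.
noise = torch.tensor([1e-4])
likelihood.raw_noise.data = \
    likelihood.raw_noise_constraint.inverse_transform(noise)
```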
coopdevs/opensource.guide
289293301
Title: Translate `best-practices.md` to Spanish (es-ES) Question: username_0: Translate the file `best-practices.md` into the Spanish language, `es-ES` locale.

- [ ] Create a branch named `best-practices-es-ES`
- [ ] Translate the whole file
- [ ] Send a pull request referring to this issue

Status: Issue closed
knative/serving
344827510
Title: hack/update-codegen.sh fails on macos with "find: illegal option -- n" Question: username_0: ## Expected Behavior Should run without error. ## Actual Behavior ``` $ hack/update-codegen.sh Generating deepcopy funcs Generating clientset for serving:v1alpha1 istio:v1alpha3 at github.com/knative/serving/pkg/client/clientset Generating listers for serving:v1alpha1 istio:v1alpha3 at github.com/knative/serving/pkg/client/listers Generating informers for serving:v1alpha1 istio:v1alpha3 at github.com/knative/serving/pkg/client/informers find: illegal option -- n usage: find [-H | -L | -P] [-EXdsx] [-f path] path ... [expression] find [-H | -L | -P] [-EXdsx] -f path [path ...] [expression] ``` ## Steps to Reproduce the Problem 1. Run `hack/update-codegen.sh` from the root directory of knative/serving _on macos_. 2. Observe the output. ## Additional Info * `hack/update-deps.sh` also fails - see [separate issue](https://github.com/knative/serving/issues/1708). * https://apple.stackexchange.com/questions/41011/using-the-find-name-command-on-os-x * https://stackoverflow.com/questions/2320564/i-need-my-sed-i-command-for-in-place-editing-to-work-with-both-gnu-sed-and-bsd/20951570 /area test-and-release /kind cleanup Answers: username_0: /assign username_0: A somewhat different fix from @cppforlife: https://github.com/knative/serving/pull/1694. I think it's preferable as it doesn't create unnecessary backup files which then need deleted. I've closed PR 1710 to make this preference clear. Status: Issue closed username_0: Thanks to https://github.com/knative/serving/pull/1694, closing.
night1ynx/mx-linux-l10n-ja
222819162
Title: live-disable-services - rethink the word order Question: username_0: [live-disable-services/ja.po](https://github.com/username_0/mx-linux-l10n-ja/blob/master/antix-development.live-disable-services/ja.po)

This is a guess, but for the entries that display a %s at the end, the %s is probably long (it seems to be built from a list of items). For that reason I translated them as "XXXサービスを無効化 %s" (i.e. "disable XXX services %s", assuming the colon had simply been omitted), but that may be wrong...
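To make the word-order concern concrete, a tiny illustration with hypothetical strings (the real entries live in live-disable-services/ja.po): if the %s expands to a long list of services, keeping it at the end reads acceptably in both languages:

```python
# Hypothetical msgid/msgstr pair, not the actual .po entries.
services = "bluetooth, cups, smartd"
msgid = "Disable live services %s"
msgstr = "ライブサービスを無効化 %s"
print(msgid % services)   # Disable live services bluetooth, cups, smartd
print(msgstr % services)  # ライブサービスを無効化 bluetooth, cups, smartd
```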
olafur-andri/olafur-andri.github.io
235028560
Title: Enhancement for your profile page Question: username_0: It would be nice if the language container (the dropdown where the user can change the language) closed when you click somewhere else on the page. That would make for a much better user experience than clicking on the icon again as the only option.

You can achieve this with a full `page wrapper` that gets displayed when the `container` is visible and is, of course, transparent. Just add a simple `EventListener` to it to close the container `onclick`. Feel free to take a look at the codebase for my website if you are wondering how it is done.

Nice website, by the way... 😉

Answers: username_1: Thank you, Christoph, for your feedback! I will definitely look into this as soon as I can. I'll make sure to check out your website for reference. Status: Issue closed
hpc/charliecloud
572430332
Title: /test/dev_proc_sys.py fails on kernels built without CONFIG_STRICT_DEVMEM=y Question: username_0: It appears when a system's kernel is built without CONFIG_STRICT_DEVMEM=y then /dev/mem is not present which is tested by `dev_proc_sys.py` called from `ch-run_uidgid.bats` leading to test failure. ``` ✗ /dev /proc /sys (in test file run/ch-run_uidgid.bats, line 49) `ch-run $uid_args $gid_args "$ch_timg" -- /test/dev_proc_sys.py' failed no ID arguments ERROR /dev/mem: exception: [Errno 2] No such file or directory: '/dev/mem' SAFE /proc/kcore: read not allowed SAFE /sys/kernel/mm/page_idle/bitmap: read not allowed ``` Answers: username_0: It's worth noting both of the systems presenting the issue are aarch64. username_1: Good find. Can you figure out an alternate device that unprivileged users shouldn't be able to read? Ideally it would be broadly available and could simply replace `/dev/mem`, but if we have to try more than one, it's OK too. username_0: @username_1 the device `/dev/cpu_dma_latency` looks like a decent candidate to me. This device is part of the [power management quality of service framework](https://www.kernel.org/doc/html/v5.0/admin-guide/pm/cpuidle.html) in the linux kernel, appears to be ubiquitous, and requires privilege to access. I think trying `/dev/mem` and if it is not found `/dev/cpu_dma_latency` like what is done with the `/proc` test should resolve this. username_1: Sounds good to me. Status: Issue closed
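A minimal sketch of the proposed fallback, using hypothetical helper names rather than the actual dev_proc_sys.py code; the output style loosely mirrors the test's SAFE/ERROR messages shown above:

```python
import os

def first_existing(paths):
    """Return the first path that exists, or None."""
    for p in paths:
        if os.path.exists(p):
            return p
    return None

# /dev/mem is absent on kernels built without CONFIG_STRICT_DEVMEM=y,
# so fall back to /dev/cpu_dma_latency, which unprivileged users should
# also be unable to read.
dev = first_existing(["/dev/mem", "/dev/cpu_dma_latency"])
if dev is None:
    print("ERROR: no candidate device found")
else:
    try:
        with open(dev, "rb") as f:
            f.read(1)
        print("RISK  %s: read allowed" % dev)
    except PermissionError:
        print("SAFE  %s: read not allowed" % dev)
```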
zeroengineteam/ZeroCore
367002531
Title: Zero Editor doesn't use OS's mouse scroll speed settings Question: username_0: # Description Repro === # Set your Vertical Scrolling speed value in Mouse Properties in the Control Panel in Windows to some value other than 1 (it probably already is) # In the Zero Editor open a text file and scroll the mouse Expected === The document scrolls as far as specified in Mouse Properties Happened === The document scrolls one line only # User Data - **UserName**: douglasZwick # Zero Engine Data - **Revision**: 626 - **ChangeSet**: zeroengineteam/zerocore@6b8fe469c6c97053c98289b8c81e3fba0d82ce1e - **Platform**: Win32 - **Build Version Name**: 1.2.0.626 zeroengineteam/zerocore@6b8fe469c6c97053c98289b8c81e3fba0d82ce1e 2017-12-08 Release Win32<issue_closed> Status: Issue closed
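For reference, the OS-level setting the editor should honor is the "wheel scroll lines" system parameter. The engine itself is C++, so this is purely illustrative, but a minimal Python sketch (Windows-only, via ctypes) of querying that value looks like:

```python
import ctypes

SPI_GETWHEELSCROLLLINES = 0x0068  # SystemParametersInfo action code

lines = ctypes.c_uint(0)
ok = ctypes.windll.user32.SystemParametersInfoW(
    SPI_GETWHEELSCROLLLINES, 0, ctypes.byref(lines), 0)
if ok:
    print("Lines to scroll per wheel notch:", lines.value)  # the value the editor currently ignores
```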
emersion/go-openpgp-wkd
1066358007
Title: golang.org/x/crypto/openpgp is deprecated and missing features Question: username_0: golang.org/x/crypto/openpgp has been deprecated this year: https://github.com/golang/go/issues/44226
It is unable to handle ed25519 keys, which are becoming more and more common. It fails to handle my key, for example.
go-openpgp-wkd could start using another openpgp implementation. For example, ProtonMail seems to maintain one that is backwards compatible with the official golang one; I tested it and it can handle ed25519 keys: https://github.com/ProtonMail/go-crypto
Make go-openpgp-wkd openpgp agnostic. I personally only care about the server implementation; for me, changing the `Handler` interface so `Discover` returns `[][]byte` would work. The server doesn't need to know how to read openpgp keys.
I'm happy to do a pull-req implementing it if we agree on the best way.
Answers: username_1: I'm fine with either. I originally used a PGP lib to avoid the classic "is it armored? what PGP packets are returned?" debate.
If we go down the "openpgp lib agnostic" route, I'd prefer to return a streaming `io.Reader`. Should be easy enough for callers to pipe that into PGP libs.
username_0: I think I like the openpgp agnostic lib more. For me it will be useful, since I only parse the keys once to store them, and then on each request I just provide the stored value.
I like the `io.Reader` approach. I'll give it a try and implement it on top of #7.
username_0: I just added the implementation using a reader to #7
akaps/hanabi_ai
381930702
Title: Linter: too-few-public-methods Question: username_0: **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior. Minimum expected steps: 1. Run configuration '...' 2. See error **Expected behavior** A clear and concise description of what you expected to happen. **Additional context** Add any other context about the problem here. Answers: username_0: Resolved with #130 Status: Issue closed username_0: **Describe the bug** We had to disable the rule from the linter, since we violate it in the current project **To Reproduce** 1. remove the rule from the disabled set of rules in `.pylintrc` 2. run `pylint hanabi_ai` **Expected behavior** pylint comes back clean, 10/10 score Status: Issue closed username_0: classes that violate this are intentionally sparse
bloom-housing/bloom
810551300
Title: Update logged out error text Question: username_0: When I leave the application, an error shows asking me to log back in
@slowbot can you add a screenshot and new proposed copy?
Answers: username_1: How do I handle the translations for a new string?
username_0: @username_1 create a new line item for the new "user accounts disabled" text; I'll send it over to the translators, but it's okay for now if it's not translated
username_0: Closed in HousingBayArea
Status: Issue closed
cnbluefire/FDSforCU
751269106
Title: Hebi technical service fee invoices - Hebi technical service fee invoices Question: username_0: Hebi technical service fee invoices [WeChat: ff181 one-plus-one ⒍⒍⒍][QQ: 249⒏ one-plus-one 357⒌⒋0] Wide business scope, complete range of categories: labor, conferences, accommodation, catering, transportation, advertising, construction, hand-torn receipts, building materials, steel, and so on... Meanwhile there is the turmoil of exam revision, so things are hard for me. Although I finished this exam paper, a few questions I was not sure about may still be wrong, so I am on edge. At leisure we ask about characters and appraise the wind and moon. At times we bring wine and mix ice and snow. Like early autumn settling into night, a light chill teasing through thin summer cloth. In this human realm let no carriages and horses draw near; in the land of drunkenness let the pipes and song never cease. Ask Shuangcheng for one tune of "Purple Clouds Return",
https://github.com/cnbluefire/FDSforCU/issues/817
https://github.com/cnbluefire/FDSforCU/issues/818
https://github.com/cnbluefire/FDSforCU/issues/819
mozillascience/plan
195912840
Title: Community Call - 2/9/17 Question: username_0: February Community Call checklist! 2/9/16 owner: @username_0 - [x] make etherpad https://public.etherpad-mozilla.org/p/sciencelab-calls-feb09-2016 - [ ] make event on science.mozilla.org - [ ] contact speakers - in pad - [ ] define theme - **Thursday before (1/2/17)** - [ ] schedule tweets [templates](https://docs.google.com/document/d/19P_G3sJVoVv58YviHUlylMR3im18CJ0XblhXVRzjyW0/edit) **Monday before (12/6/17)** - [ ] email speakers a reminder [templates](https://docs.google.com/document/d/19P_G3sJVoVv58YviHUlylMR3im18CJ0XblhXVRzjyW0/edit) **Call** - [ ] have call **Post-call** - [ ] followup mailing list and sum-up via newsletter - [ ] tweet out etherpad - [ ] log in [wiki](https://wiki.mozilla.org/ScienceLab/Calls#Call_archives) - [ ] add [metrics to call participant list](https://public.etherpad-mozilla.org/p/2016-call-participants)<issue_closed> Status: Issue closed
dotnet/sdk
755691044
Title: Forced locked mode (dotnet restore --locked-mode) doesn't work cross platform Question: username_0: ## Details about Problem
When running a dotnet restore --locked-mode from a **linux** environment where dotnet restore --force-evaluate was run on **windows**, it fails. This makes things very difficult to test cross platform. (Build on a linux agent, then run tests on windows). Note: it happens either way (a lockfile generated on linux causes windows to fail)
NuGet product used: dotnet.exe
NuGet version (x.x.x.xxx):
```
❯ dotnet nuget --version
NuGet Command Line 5.4.0.2
```
dotnet.exe --version (if appropriate):
```
❯ dotnet --version
3.1.102
```
OS version (i.e. win10 v1607 (14393.321)):
Windows 10 Enterprise: 10.0.18363 and Ubuntu 18.04 (WSL in my case, but it repros on build agents as well)
Worked before? If so, with which NuGet version: Unknown
## Detailed repro steps so we can see the same problem
1. git clone https://github.com/jabbera/nuget-locked-mode-bug.git
2. cd nuget-locked-mode-bug
3. dotnet restore --locked-mode # This succeeds as it should
4. bash
5. dotnet restore --locked-mode # This fails with: error NU1004: The packages lock file is inconsistent with
6. dotnet restore --force-evaluate
7. exit
8. git diff # This will let you see the differences between the OS's
## Other suggested things
Output of the diff:
```
diff --git a/packages.lock.json b/packages.lock.json
index 1d02123..d72a20f 100644
--- a/packages.lock.json
+++ b/packages.lock.json
@@ -9,7 +9,6 @@
         "contentHash": "7D2TMufjGiowmt0E941kVoTIS+GTNzaPopuzM1/1LSaJAdJdBrVP0SkZW7AgDd0a2U1DjsIeaKG1wxGVBNLDMw=="
       }
     },
-    ".NETCoreApp,Version=v3.1/win7-x86": {},
     ".NETFramework,Version=v4.6.2": {
       "Microsoft.NETFramework.ReferenceAssemblies": {
         "type": "Direct",
@@ -25,7 +24,6 @@
         "resolved": "1.0.0",
         "contentHash": "ONGjkFWduK13lfxUtlEl4+nYwrqDe5NF5f8qRtp5fqWiWYlqft/Ko9ht3e6Secg9y3I1yL8Xnfag/JGOOn0yoQ=="
       }
-    },
-    ".NETFramework,Version=v4.6.2/win7-x86": {}
+    }
   }
 }
\ No newline at end of file
```
_Originally posted by @username_0 in https://github.com/NuGet/Home/issues/9195#issuecomment-737558775_
I can reproduce this issue on a `.NET Core Console App` that multi-targets `net472` and a .NET Core tfm. I **cannot** reproduce this issue on a `.NET Core Class Library` project with multiple target frameworks.
@nkolev92 helped me to find that the `.NET SDK` sets the runtime identifier to `win7-x86` [here](https://github.com/dotnet/sdk/blob/master/src/Tasks/Microsoft.NET.Build.Tasks/targets/Microsoft.NET.RuntimeIdentifierInference.targets). It looks like the following comment in the `.targets` file confirms this behavior.
`When building a .NETFramework exe on Windows and not given a RID, we'll pick either win7-x64 or win7-x86`
![image](https://user-images.githubusercontent.com/52756182/100943906-fe49bb00-34b2-11eb-9a54-d9ac457c3cfa.png)
Transferring to `dotnet/SDK` team for feedback.
https://github.com/NuGet/Home/issues/9195#issue-567828855
Answers: username_1: @username_0 it looks like this is a nuget bug and a workaround was provided in the discussion on that issue: https://github.com/NuGet/Home/issues/9195#issuecomment-777153274. If you have further feedback, please provide it on that issue, thanks!
Status: Issue closed
zzzprojects/EntityFramework-Extensions
961322389
Title: Oops! The method 'GetAllPath' failed with error: more than one table has been found Question: username_0: ### Description
Hi, using BulkSaveChangesAsync() and trying to save a few operations (inserts), I get an exception. The mentioned table is a view.
builder.ToView("view_catalog_entries");
builder.HasKey(e => e.Id);
With the EF SaveChangesAsync(), there is no error.
Issue under v 5.1.5; updated to 5.2.6 with no improvement (but a better error message :))
### Exception
An exception resulting in an error 500 ("Internal Server Error") was encountered.
System.Exception: Oops! The method 'GetAllPath' failed with error: more than one table has been found
entityTypeNavigation.TargetEntityType.Name = *****.ViewCatalogEntry
at Z.EntityFramework.Extensions.EntityTypeZInfo.?(IEntityType ?)
at Z.EntityFramework.Extensions.EntityTypeZInfo.GetFlattenHierarchyProperties(IEntityType entityTypeMaster)
at Z.EntityFramework.Extensions.EntityTypeZInfo..ctor(IEntityType entityType)
at PublicExtensions.?.?(IEntityType ?)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at PublicExtensions.ToZInfo(IEntityType entityType)
at ?.BulkUpdate[T](DbContext this, IEnumerable`1 entities, Action`1 options, Boolean isBulkSaveChanges)
at ?.?(DbContext this, List`1 ?, Action`1 ?)
at ?.?(DbContext this, StateManager ?, IReadOnlyList`1 ?, Action`1 ?)
at ?.?(DbContext this, StateManager ?, IReadOnlyList`1 ?, Action`1 ?)
at ?.?(DbContext this, Action`1 ?, DbContext ?)
at DbContextExtensions.BulkSaveChanges(DbContext this, Action`1 options)
at DbContextExtensions.?.?()
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at DbContextExtensions.BulkSaveChangesAsync(DbContext this, Action`1 options, CancellationToken cancellationToken)
### Further technical details
- EF version: 5.0
- EF Extensions version: 5.2.6
- Database Provider: MariaDb
Answers: username_0: Edit: Changing from `builder.ToView("...");` to `builder.ToTable("...");` fixes the problem. However this seems abnormal, and will probably produce some adverse effect somewhere else in my code. Thanks for investigating
username_1: Hello @username_0 , Thank you for reporting, we will look at it. We indeed don't have a lot of tests with `ToView`, so we will look at it. Best Regards, Jon
username_1: Hello @username_0 , My developer failed to reproduce your exact error. Is it possible to provide a runnable project or even some part of your code to help him reproduce it? Here is what he asked:
- The view
- The entity
- The navigation between all entities and how they are set (`ViewCatalogEntry` looks to be the most important one)
The best is of course a runnable project (you can send it in private here: <EMAIL>) but if you cannot, try to give us as much information as you can and he will try again.
username_1: Hello @username_0 Since our last conversation, we haven't heard from you! As mentioned in my previous message, could you provide a runnable project or parts of your code to help him reproduce the issue?
Here is what he asked:
- The view
- The entity
- The navigation between all entities and how they are set (ViewCatalogEntry looks to be the most important one)
Don't hesitate to provide a runnable project in private: <EMAIL> Looking forward to hearing from you, Jon
username_0: Hello Jon, Yes, I will try to reproduce it in a small project asap
username_1: Awesome! I will be looking forward to hearing from you, Jon
username_1: Hello @username_0 , Did you make any progress in reproducing this issue in a standalone project? Best Regards, Jon
Status: Issue closed
username_1: Hello @username_0 , Since our last conversation, we haven't heard from you! Unfortunately, this issue will be closed. However, we will be glad to reopen it once you can provide a standalone project. Best Regards, Jon
deepset-ai/FARM
815241802
Title: Simplify tasks and connect_heads_with_processor usage/documentation Question: username_0: The usage of tasks, and how we need to connect PredictionHeads with Processors, is non-obvious and needs to be simplified or better documented in example scripts for Multi Task learning (e.g. LM finetuning).
E.g. see the error message in AdaptiveModel:
"Label_tensor_names are missing inside the {head.task_name} Prediction Head. Did you connect the model" " with the processor through either 'model.connect_heads_with_processor(processor.tasks)'" " or by passing the processor to the Adaptive Model?"
In the lm_finetuning example we neither call connect_heads_with_processor nor pass the processor to the AdaptiveModel.
danielgerlag/workflow-core
1095389879
Title: How to get structure of workflow? Question: username_0: Hi, is it possible to get the structure of a workflow? For example:
```
{
  "Id": "HelloWorld",
  "Version": 1,
  "Steps": [
    {
      "Id": "Hello",
      "StepType": "MyApp.HelloWorld, MyApp",
      "NextStepId": "Bye"
    },
    {
      "Id": "Bye",
      "StepType": "MyApp.GoodbyeWorld, MyApp"
    }
  ]
}
```
Is it possible to get the structure, with inputs, from a workflow?
Answers: username_1: Hi @username_0 , yes, there is a way. Get a dependency of `IWorkflowRegistry`, and assuming you keep it in a variable called `workflowRegistry`, you could do:
    var def = workflowRegistry.GetDefinition("HelloWorld", 1);
    var s = new
    {
        Id = def.Id,
        Version = def.Version,
        Steps = def
            .Steps
            .Select(s => new
            {
                Id = s.Id,
                StepType = s.BodyType.FullName
            })
            .ToList(),
    };
Note that here, the `Id` prop of each step will be an `int`, unlike things like `"Hello"` and `"Bye"` you've included as examples. I assume you want to include a human-readable text to describe each step. If that's what you want to do, then you must use `.Name("Hello")` in the workflow builder to give the steps you are interested in a name. Then, you could replace `s.Id` with `s.Name` to get that name. If you don't use the `.Name("...")` method in the builder, this will be null.
symisc/unqlite
348838310
Title: Question: Is it possible to store binary data in a Document? Question: username_0: The title says it all! Storing raw binary data in a key value store works well, but I'm having trouble figuring out how to store raw bytes in a document. How would one go about doing this? For what it's worth, I'm using the Python client library. Thanks! Answers: username_1: Yes, you could store raw bytes (i.e. Blobs) on a document store database using the **_base64_encode()_** Jx9 built-in routine. Decoding is done via the built-in base64_decode() routine. Needless to say, we do recommend that you rely on the standard [key/value store interfaces](https://unqlite.org/c_api/unqlite_kv_store.html) if you are dealing with binary data. It is safer and **faster**. The built-in Jx9 routines are documented here https://unqlite.org/jx9_builtin.html. Status: Issue closed
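To make the recommendation concrete: since the reporter mentions the Python client, here is a minimal sketch of round-tripping binary data through a document collection via base64 (assuming the `unqlite-python` bindings; the database file and field names are made up for illustration):

```python
import base64
from unqlite import UnQLite  # assumes the unqlite-python package

db = UnQLite("test.db")
attachments = db.collection("attachments")
attachments.create()  # creates the collection if it does not exist yet

blob = b"\x00\x01\x02 raw binary payload"
# Encode on the way in, since JSON documents cannot hold raw bytes...
attachments.store({"name": "payload", "data": base64.b64encode(blob).decode("ascii")})

# ...and decode on the way out.
record = attachments.fetch(0)  # first record in the collection
assert base64.b64decode(record["data"]) == blob
```

As the maintainer notes, the key/value interfaces remain the safer and faster choice for large blobs; the base64 route is for when the bytes must live inside a JSON document.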
simonedelmann/crud-kit
777685393
Title: app.crud for longer endpoint Question: username_0: I'd like to get responses on e.g. "/site/api/todos". And it seems it's not possible to do that right now, because "app.crud(...)" accepts only one string. I thought about something like this:
```swift
extension RoutesBuilder {
    public func crud<T: Model & CRUDModel>(_ endpoints: [String], model: T.Type, custom: ((RoutesBuilder, CRUDController<T>) -> ())? = nil) where T.IDValue: LosslessStringConvertible {
        let endpoint = endpoints.last!
        let modelComponents = endpoints.map { endpoint in PathComponent(stringLiteral: endpoint) }
        let idComponents = modelComponents + [PathComponent(stringLiteral: ":\(endpoint)")]
        let routes = self.grouped(modelComponents)
        let idRoutes = routes.grouped(idComponents)
        let controller = CRUDController<T>(idComponentKey: endpoint)
        controller.setup(self, on: endpoint)
        custom?(idRoutes, controller)
    }
}
```
But I can't use it, because CRUDController<T> is internal.
Answers: username_1: Thanks for your issue! If I understood correctly, you can - and should - use Vapor's route groups for this. (See [here](https://docs.vapor.codes/4.0/routing/#route-groups))
```swift
let api = app.grouped("site", "api")
api.crud("todos", ...)
```
Anyhow I can make `CRUDController` public if you want to extend `RoutesBuilder` for this. Feel free to make a PR!
pycontribs/jira
155702566
Title: logging into Jira via script Question: username_0: Working from the documentation at http://jira.readthedocs.io/en/latest/examples.html#quickstart
I have a Python script of:
from jira import JIRA
from collections import Counter
options = { 'server': 'http://centos7:8080', 'basic_auth': ('user1', '<PASSWORD>')}
jira = JIRA(options)
Whether the password is correct or not, running the script gives no output and the errorlevel is 0. If I add to the script and try to get an issue, then I get a 401 error: "You do not have the permission to see the specified issue.". Which is fair enough if I have not logged in properly. Does anyone know what I am doing wrong? Regards, John
Status: Issue closed
Answers: username_0: jira_options = {'server': 'http://centos7:8080'}
jira = JIRA(options=jira_options, basic_auth=('user1', '<PASSWORD>'))
This did work
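The key point in the resolution above is that `basic_auth` is its own keyword argument to `JIRA(...)`, not a key inside the `options` dict. A minimal runnable sketch (the server URL and the `<PASSWORD>` placeholder are carried over from the issue; the JQL query is just an example sanity check):

```python
from jira import JIRA

jira = JIRA(
    options={"server": "http://centos7:8080"},
    basic_auth=("user1", "<PASSWORD>"),  # passed separately, not inside options
)

# Quick check that authentication actually worked (401 would raise here instead):
for issue in jira.search_issues("assignee = currentUser()", maxResults=5):
    print(issue.key, issue.fields.summary)
```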
Anuken/Mindustry-Suggestions
788407975
Title: Map Editor: Publish map as new Workshop Item / unlink from old Workshop Item Question: username_0: **Describe the content or mechanics you are proposing.**
Right now, if you were to make a variant of a map, you cannot publish it independently to the Steam workshop; you can only override the old one. If the game thinks your map is already linked to a workshop item, it is impossible to publish it without overriding the existing workshop item. So I would like an option in the map editor to unlink your map from a workshop entry, or to publish your map as a new workshop entry instead of overwriting an old one.
**Describe how you think this content will improve the game. If you're proposing new content, mention how it may add more gameplay options or how it will fill a new niche.**
I have this problem with one of my maps, where I created a survival map and published it to the workshop. Later I used the in-editor function to copy the map, create a new one with a different name and paste the old map into it, so I could alter it to be an attack map without messing with my old survival map. I changed the layout a bit, changed the player spawn, and added enemy defences as well as ores. But when I was done I noticed I cannot actually publish this map to the workshop; instead I can only overwrite the workshop listing for my old survival one.
Right now I'm in this position: in game I have both my attack map and my survival map listed as completely separate entities I can select and play on. But if I enter either map in the Editor, they both have the "Edit Workshop Listing" button linked to the same workshop entry.
An option as proposed here would allow for a workaround whenever the editor falsely links your map to an already existing one you don't want to override.
**Before making this issue, place an `X` in the boxes below to confirm that you have acknowledged them.**
*Failure to do so may result in your request being closed automatically.*
1. - [X] I have done a quick search in the list of suggestions to make sure this has not been suggested yet.
2. - [X] I have checked the [Trello](https://trello.com/b/aE2tcUwF/mindustry-trello) to make sure my suggestion isn't planned or implemented in a development version.
3. - [X] I am familiar with all the content already in the game or have glanced at the wiki to make sure my suggestion doesn't exist in the game yet.
4. - [X] I have read `README.md` to make sure my idea is not listed under the "A few things you shouldn't suggest" category.
Answers: username_1: I have had this problem with several survival-to-attack/survival conversions, and see no reason not to add a "Unlink Workshop" button, maybe in the "Upload to Workshop" dialog.
username_2: You may be able to work around this by exporting the map to a file and then reimporting it.
username_0: I tried that and sadly it does not work. After importing, the editor always thinks it is linked to the old workshop item. I think the information about which workshop item it is linked to is saved in the map file you export.
username_2: I face a similar problem with Steam Workshop schematics, because the workshop ID is saved to schematic files/codes as well. At least that is easy to work around by placing the schematic, selecting it, and saving it as a new unlinked schematic.
There used to be a bug where if people imported a schematic with a Steam ID, [they were unable to delete the schematic](https://steamcommunity.com/app/1127400/discussions/0/3145133021635675833/) because the delete button was replaced with a link to the workshop. This was fixed by stripping the Steam ID from all imported schematics. The same may need to be done for map importing. username_3: the only way around this is to start a new game, export that save, then import it as a map. all the waves need to be redone, but overall it is really frustrating not being able to unlink it
TekNoLogic/Quecho
342125998
Title: DamnBacon Question: username_0: 1x Quecho\services\inbound_comms.lua:29: attempt to call global 'RegisterAddonMessagePrefix' (a nil value) Quecho\externals\events.lua:79: in function <Quecho\externals\events.lua:64> Quecho\externals\events.lua:107: in function <Quecho\externals\events.lua:106> Locals: nil 1x Quecho\externals\events.lua:41: Attempt to register unknown event "_QUEST_ACCEPTED" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' Quecho\services\progess_bar_quests.lua:57: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_QUEST_ACCEPTED" 1x Quecho\externals\events.lua:41: Attempt to register unknown event "_PARTY_EXPIRE" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' Quecho\services\party_tracker.lua:65: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_PARTY_EXPIRE" 1x Quecho\externals\events.lua:41: Attempt to register unknown event "_PARTY_ABANDON" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' Quecho\services\party_printout.lua:17: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_PARTY_ABANDON" 1x Quecho\externals\events.lua:41: Attempt to register unknown event "_QUEST_ABANDONED" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' Quecho\services\outbound_comms.lua:25: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_QUEST_ABANDONED" 1x Quecho\externals\events.lua:41: Attempt to register unknown event "_PARTY_PROGRESS" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' ...faceQuecho\services\objective_expiration.lua:31: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_PARTY_PROGRESS" 3x Quecho\externals\events.lua:41: Attempt to register unknown event "_THIS_ADDON_LOADED" [C]: in function `RegisterEvent' Quecho\externals\events.lua:41: in function `RegisterCallback' Quecho\services\inbound_comms.lua:36: in main chunk Locals: (*temporary) = <unnamed> { 0 = <userdata> } (*temporary) = "_THIS_ADDON_LOADED"
auth0/docs
232653533
Title: vue tutorial incorrect snippet path Question: username_0: (AUTH-4060) The [Vue tutorial](https://auth0.com/docs/quickstart/spa/vuejs/01-login#add-a-callback-component) contains a snippet with an incorrect path.
![image](https://cloud.githubusercontent.com/assets/1114365/26647255/64d183de-4647-11e7-9eca-cfe99e98d81e.png)<issue_closed>
Status: Issue closed
sw19-tug/SPbPU2019
517584672
Title: FourRow-1 Two Player Mode Question: username_0: As two players we want to play the "four-in-a-row" game in a browser from a single PC. **Acceptance Criteria:** Given we are two players. When we play the game. Then we play on the same field And we can use the controls in turns to make a move.<issue_closed> Status: Issue closed
OpenNeuroOrg/openneuro
503639044
Title: Public dataset not available on S3 Question: username_0: A user reported that a public dataset is not available on S3. The result is empty folders in the downloaded folder. It appears this dataset does not have a GitHub remote either.
Dataset - [ds001832](https://openneuro.org/datasets/ds001832/versions/1.0.0)
Answers: username_1: Perhaps we should contact the author and ask them to update with mandatory metadata (in particular authors) and make a new release.
username_0: I can do that - having it in the public bucket AFAIK is independent of this (the process was different in April)
username_1: True. But I see the missing DOI, so I suspect some of the publication processes got interrupted. If we have a valid release, it may restart all of those.
username_0: That's true - I know at that time our DOI process had a few issues. I've reached out to the owner to proceed with this potential solution and see what happens
username_0: Generated a new snapshot - https://openneuro.org/datasets/ds001832/versions/1.0.1. Reached out to the user who was trying to download, to confirm whether this has fixed the issue
username_2: ds001832 is fully available now.
Status: Issue closed
certbot/certbot
332021252
Title: --manual --preferred-challenges=dns certonly cannot seem to validate multiple domains Question: username_0: ## My operating system is (include version):
Ubuntu 16.04
## I installed Certbot with (certbot-auto, OS package manager, pip, etc):
From github.
## I ran this command and it produced this output:
$ sudo ./certbot-auto -d first -d second -d third -d fourth --manual --preferred-challenges=dns certonly
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for first
dns-01 challenge for second
dns-01 challenge for third
dns-01 challenge for fourth
-------------------------------------------------------------------------------
NOTE: The IP of this machine will be publicly logged as having requested this certificate. If you're running certbot in manual mode on a machine that is not your server, please ensure you're okay with that.
Are you OK with your IP being logged?
-------------------------------------------------------------------------------
(Y)es/(N)o: yes
-------------------------------------------------------------------------------
Please deploy a DNS TXT record under the name _acme-challenge.first with the following value:
<snip>
Before continuing, verify the record is deployed.
(repeat three times)
Press Enter to Continue
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. first (dns-01): urn:acme:error:dns :: DNS problem: NXDOMAIN looking up TXT for _acme-challenge.first, (repeat)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: first
Type: None
Detail: DNS problem: NXDOMAIN looking up TXT for _acme-challenge.first
(repeat)
## Certbot's behavior differed from what I expected because:
One out of four deployed TXT records can't be found by certbot, but dig finds all four. The primary DNS contains all four records; the secondary DNSes refer to the primary. Certbot randomly fails all but one of the challenges.
Status: Issue closed
Answers: username_1: Certbot isn't the one that checks your DNS records. Let's Encrypt's CA software does this. Certbot is just the client that tells you how to set them up.
I recommend posting at https://community.letsencrypt.org. There's a large community of people there who will be able to help you.
username_0: Ok, fair enough on who does what. It's not clear when you just use the software, so thanks for the pointer! However, a _community_ page isn't really helpful for reporting bugs. So I think I'll skip that, and not use Let's Encrypt.
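Since the CA resolves the records from the public internet rather than from your local cache, one way to approximate what it sees is to query a public resolver directly before pressing Enter in certbot. A small sketch, assuming the `dnspython` package (the domain names are placeholders standing in for the redacted ones in the report):

```python
import dns.resolver  # assumes the dnspython package (>= 2.0 for resolve())

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]  # a public resolver, closer to what the CA sees

for name in ["_acme-challenge.example.com", "_acme-challenge.www.example.com"]:
    try:
        answers = resolver.resolve(name, "TXT")
        print(name, "->", [r.to_text() for r in answers])
    except dns.resolver.NXDOMAIN:
        print(name, "-> NXDOMAIN (record not yet visible to public resolvers)")
```

If some names come back NXDOMAIN here while `dig` against the primary succeeds, the secondaries likely have not synced the new TXT records yet, which matches the intermittent failures in the report.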
backdrop/backdrop-issues
186119891
Title: Normalize CSS should address FF's use of text decoration on `abbr` tag Question: username_0: Firefox seems to be using `text-decoration: 1px dotted;` (or something similar); no other browser does, so this should be normalized with Backdrop's core CSS.
Answers: username_0: We might consider just updating to the latest version: https://github.com/necolas/normalize.css/blob/master/normalize.css
username_1: ...I tried replacing our 3.0.1 with 4.0.0 and then the latest 5.0.0, but unfortunately none of them solved the issue. Must be a Firefox bug, but it would still be a good idea to update normalize.css and keep it updated with future versions.
username_1: ...and let me upload some screenshots so that everyone understands what we are talking about @username_0 (because I pm'ed you on Gitter).
username_1: This is what the "required" asterisks currently look like on our forms in Chrome:
![](https://files.gitter.im/username_1/7gaa/backdrop-required_asterisk_on_forms-chrome.png)
...and this is how they look in Firefox:
![](https://files.gitter.im/username_1/fMvS/backdrop-required_asterisk_on_forms-firefox.png)
...here's how they look respectively with the changes from https://github.com/backdrop/backdrop/pull/1629 applied:
![](https://files.gitter.im/username_1/zsrs/backdrop-required_asterisk_on_forms-chrome-dots_removed.png)
![](https://files.gitter.im/username_1/7fdb/backdrop-required_asterisk_on_forms-firefox-dots_not_removed.png)
WilliamsPaleoLab/WilliamsPaleoLab.github.io
185286500
Title: Nav Bar font hard to read on non-home pages Question: username_0: @username_1 - you've probably noticed that the light font of the nav bar looks great on the home page, but is hard to read on the other pages. The nav bar does a neat trick of darkening as it scrolls down. Not sure of the best solution - I can think of a couple of options. Can you tackle this one?
Answers: username_1: I tinkered with colors and opacity. It looks clunky if the nav bar banner color extends over the top of the photo. It also looks bad if there's a change in opacity. I like the cleanness of it as it is now, but you're right, it's hard to read. I think we can come up with something that looks clean but can be read. It's not that simple though -- there's some messy CSS involved thanks to the template we're using.
username_2: I actually have a hard time seeing the nav bar on the home page. @username_1 I guess you are working on this? Other than that, the site looks great so far.
username_1: Yeah I'm on it. Busy week though -- you can play with it if you want!
username_1: Are you happy with the new nav bar now @username_0 ? Should we close this issue?
username_0: Thanks to @username_1 and @kdburke for wrangling with the NavBar! I like the new solution. Closing issue.
Status: Issue closed
alexcrichton/AudioStreamer
102755819
Title: Can I track downloaded data progress? Question: username_0: How can I track downloaded data progress (a buffering progress indicator) like this image? ![](https://cloud.githubusercontent.com/assets/4496393/5715879/e8b072f0-9b13-11e4-9a5a-15e37a717466.png)
Status: Issue closed
Answers: username_1: Added in 0f3e804. The reason I didn't add it immediately was that the streamer didn't actually account for already downloaded data, so any reported download progress would be useless. It was a bigger task than I thought, but everything should work much better now. The method is called `bufferProgress:` and works in exactly the same way as you would use `progress:`.
maizy/sightreading.training
775223202
Title: Use other build tool Question: username_0: - [ ] remove tup because of its complicated setup
- [ ] use any other popular_at_the_time_of_the_day js build tool
- [ ] update some outdated dependencies
- [ ] nodejs 15.x?
- [ ] swc
- [ ] ...
- [ ] update docker build
SteNuf/Medikamentplanner
794577119
Title: Missing space, violation of HTML specification Question: username_0: https://github.com/username_1/Medikamentplanner/blob/4e982e74695509ae9a353958c5c9e5f359ca139e/mediplanertest.html#L12 Missing space between `"eingabe"` and `action` ==> violating HTML specification Answers: username_1: Work done Status: Issue closed
PKUDeleted/Holes
543795843
Title: #1073024 Under this difficult choice, I have pondered back and forth, unable to eat or sleep in peace. Guo Moruo once said something philosophical: the decisive factor in the making of genius should be... (Follows: 2 Comments: 1) Question: username_0: Under this difficult choice, I have pondered back and forth, unable to eat or sleep in peace.
Guo Moruo once said something philosophical: the decisive factor in the making of genius should be diligence. Though short, this remark sets my mind racing.
Charles Schwab once said that a person can succeed at almost anything for which he has unlimited enthusiasm. This inspired me
**2019-12-30 16:11:01 Follows: 2 Comments: 1**
Answers: username_0: **[Alice]** Seneca once remarked in passing that life is like a fable: its value lies not in its length but in its content. This inspired me. Generally speaking, we must all consider this very carefully.
Why did Peking University come about?
Schopenhauer once remarked in passing that the will is a strong blind man who carries a sighted lame man on his shoulders. This inspired me. How exactly does the occurrence of Peking University come about, and how would its non-occurrence arise?
Napoleon Hill once remarked in passing: do not wait; the time will never be just right
Status: Issue closed
PaddlePaddle/PaddleOCR
757633829
Title: Paddle OCR C++ build problem Question: username_0: ```
CMakeFiles/ocr_system.dir/src/ocr_cls.cpp.o: In function `PaddleOCR::Classifier::LoadModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
/home/hznu/ocr/PaddleOCR/deploy/cpp_infer/src/ocr_cls.cpp:86: undefined reference to `paddle::AnalysisConfig::SetModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
CMakeFiles/ocr_system.dir/src/ocr_det.cpp.o: In function `PaddleOCR::DBDetector::LoadModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
/home/hznu/ocr/PaddleOCR/deploy/cpp_infer/src/ocr_det.cpp:21: undefined reference to `paddle::AnalysisConfig::SetModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
CMakeFiles/ocr_system.dir/src/ocr_rec.cpp.o: In function `PaddleOCR::CRNNRecognizer::LoadModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
/home/hznu/ocr/PaddleOCR/deploy/cpp_infer/src/ocr_rec.cpp:124: undefined reference to `paddle::AnalysisConfig::SetModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
collect2: error: ld returned 1 exit status
CMakeFiles/ocr_system.dir/build.make:358: recipe for target 'ocr_system' failed
make[2]: *** [ocr_system] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ocr_system.dir/all' failed
make[1]: *** [CMakeFiles/ocr_system.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
```
Answers: username_1: Which version of the inference library are you using? It looks like AnalysisConfig cannot be found.
username_0: ubuntu14.04_cpu_avx_mkl, version 1.84 - I've tried other versions as well, and none of them work.
username_2: How about trying the 2.0.0rc inference library?
Status: Issue closed
username_0: Running `sh tools/build.sh` fails with the same undefined-reference errors as above.
Status: Issue closed
username_3: May I ask how this was finally resolved?
username_4: Same problem here - how can it be solved? System: Ubuntu 18.04.5 LTS
username_5: Any solution yet?
username_6: Has this been resolved?
anfangd/laravel-ddd-sample-for-beginners
575957930
Title: Create Repository related to User Domain Question: username_0: ```bash
# Create Model and Migration Files
php artisan make:model Database/Eloquent/UserDataModel --migration
```
```bash
# execute migration
php artisan migrate

# You can rollback migration if you want.
php artisan migrate:rollback
```
Answers: username_0: Summary of Laravel query builder syntax (QueryBuilder/DB Facade) https://www.ritolab.com/entry/93#
username_0: [PHP] Creating random fake data with Faker - Qiita https://qiita.com/Sa2Knight/items/fb82be7551cc84764267
Status: Issue closed
taboca/FPTI-latinoware
183238348
Title: New Twitter with Images Question: username_0: Update the HTML front-end to the newer, smoother version. For debugging the Twitter JSON, use http://jsonviewer.stack.hu. Think about a mechanism for JSON debugging.
Status: Issue closed
Answers: username_0: Ended up adding the support for images, but intentionally regressed to a new Twitter component with animation:
* https://github.com/username_0/FPTI-latinoware/tree/master/static/twitter-1
* https://github.com/username_0/FPTI-latinoware/tree/master/static/destaques-flex
idyll-lang/idyll-studio
813073421
Title: Open from Github Question: username_0: Feature request: load a project from GitHub (or plain git). It doesn't need to be smart about pushing changes back to Git; it can just clone / download the project from a git repository and then continue to function as it does currently.
numpy/numpy
24303595
Title: savetxt raises a misleading error if array's shape is not rectangular Question: username_0: Hi,
```
a = np.array([[1., 2.], [1., 2., 3.]])
np.savetxt('/tmp/toto', a)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-4a2768c04067> in <module>()
----> 1 np.savetxt('/tmp/toto', a)
/usr/lib/python3.3/site-packages/numpy/lib/npyio.py in savetxt(fname, X, fmt, delimiter, newline, header, footer, comments)
1061 else:
1062 for row in X:
-> 1063 fh.write(asbytes(format % tuple(row) + newline))
1064 if len(footer) > 0:
1065 footer = footer.replace('\n', '\n' + comments)
TypeError: a float is required
```
The error is misleading because all elements are floats. I agree that the array's dtype is object, but it would be easier to debug if we could distinguish between an array with mixed types and an incorrectly shaped array. It took me half an hour to figure out my mistake.
I would be happy to hear your comments. Thanks.
Answers: username_1: This should be closed I think.
username_2: This now (1.12.1) gives the error `TypeError: Mismatch between array dtype ('object') and format specifier ('%.18e')`, which seems reasonable enough
Status: Issue closed
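For readers hitting this today: a ragged nested list collapses into a one-dimensional `object` array rather than a 2-D float array, and a quick shape/dtype check surfaces the real problem before `savetxt` does. A minimal sketch (note that recent NumPy versions, 1.24 and later, refuse to build a ragged array unless `dtype=object` is passed explicitly):

```python
import numpy as np

# dtype=object is required on NumPy >= 1.24; older versions inferred it silently.
a = np.array([[1., 2.], [1., 2., 3.]], dtype=object)
print(a.shape, a.dtype)  # (2,) object -- one entry per inner list, not a 2-D float array

# A pre-check that turns the confusing TypeError into an explicit message:
if a.ndim != 2 or a.dtype == object:
    raise ValueError(f"savetxt needs a rectangular numeric array; got shape {a.shape}, dtype {a.dtype}")
```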
hackiftekhar/IQKeyboardManager
450602502
Title: Fixed the black screen caused by tapping Done Question: username_0: This fixes the black screen that appeared after tapping Done.
Cause:
```
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.tfLogisNum becomeFirstResponder];
}
```
Fix:
```
- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [self.tfLogisNum becomeFirstResponder];
    });
}
```
![1559274394612198 2019-05-31 11_48_22](https://user-images.githubusercontent.com/22389902/58680359-11e60400-839a-11e9-9dc2-0cd07c54c481.gif)
Answers: username_0: https://www.jianshu.com/writer#/notebooks/11816854/notes/47883214/preview I solved this problem in a rather clumsy way; take a look.
username_1: Please remove becomeFirstResponder from viewDidLoad
Status: Issue closed
ChurchCRM/CRM
190188179
Title: login page not found Question: username_0: After setup I was redirected to crm.boonvillechurch.com/login and I got this.
_Not Found The requested URL /login was not found on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request._
However, if I manually type in crm.boonvillechurch.com/Login.php I get a login page, but I don't have a login yet? Did I configure something wrong? - I am set up on DreamHost on a subdomain and using Chrome.
Answers: username_1: what version of php are you running? you also need to have mod_rewrite enabled
Status: Issue closed
username_1: no activity in a while... closing
username_2: After the setup script, where the entire list was 'green' including PHP 7.0, I get the exact same issue as described above. I confirm the database tables have all been created.
Installation details are: Ubuntu 16.04.2 LTS with Apache2
url: https://mail.liberdale.co.za/churchcrm/login
Apache2 has the following rules, which can't be changed:
1) for port 80:
```
RewriteEngine on
RewriteCond %{SERVER_NAME} =mail.liberdale.co.za
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
```
2) for port 334
```
ServerName mail.liberdale.co.za
DocumentRoot /var/www/html
```
Include/Config.php file as follows:
```
$sRootPath = '/churchcrm';
$URL[0] = 'https://mail.liberdale.co.za/';
```
.htaccess
```
RewriteEngine On
# Some hosts may require you to use the `RewriteBase` directive.
# If you need to use the `RewriteBase` directive, it should be the
# absolute physical path to the directory that contains this htaccess file.
#
#RewriteBase /var/www/html/churchcrm/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [QSA,L]
<IfModule mod_php5.c>
php_value short_open_tag On
</IfModule>
```
I pointed the browser to https://mail.liberdale.co.za/churchcrm/ and received the following message:
Error message: The requested URL /churchcrm/login was not found on this server.
I would appreciate some help. Everything I tried (including editing .htaccess and adding "RewriteBase /var/www/html/churchcrm/") did not work; it reports too many redirects and lands on "http://mail.liberdale.co.za/churchcrm/". We are trying to evaluate the application on an existing server. Thanks again.
username_4: @username_2 In your posted configuration, the line "RewriteEngine Off" is a misconfiguration, and adding it to your system will not change any system behavior. The reason "RewriteEngine Off" is allowed by Apache is that, if you include multiple "RewriteRule" parameters in your configuration, then instead of commenting them all out, you can explicitly use "RewriteEngine Off" to disable every "RewriteRule". More importantly, the default value of "RewriteEngine" is already "off", so adding "RewriteEngine Off" is quite unnecessary and may confuse users. Since there is no "RewriteRule" here, deleting "RewriteEngine Off" would be ideal.
Related Apache source code snippet:
```c
static apr_status_t run_rewritemap_programs(server_rec *s, apr_pool_t *p)
{
    if (conf->state == ENGINE_DISABLED) {  /* "RewriteEngine" is off */
        return APR_SUCCESS;                /* early return: "RewriteRule" handling below never runs */
    }
    rewritemap_program(...);               /* usage of "RewriteRule" */
}
```
ReactiveX/RxSwift
230115256
Title: framework not found RxCocoa for architecture x86_64 in Xcode 8.3.2 Question: username_0: **Short description of the issue**:
I upgraded my Xcode to 8.3.2. I run my app, which works well. When I run my unit tests, I get the error "framework not found RxCocoa for architecture x86_64".
**RxSwift/RxCocoa/RxBlocking/RxTest version/commit**
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '10.0'
use_frameworks!
target 'NewAPP' do
pod 'Alamofire', '~> 4.4'
pod 'SnapKit', '~> 3.2.0'
pod 'SwiftyJSON'
pod 'RxSwift', '~> 3.0'
pod 'RxCocoa', '~> 3.0'
pod 'RxOptional'
pod 'Moya/RxSwift'
pod 'Moya-ModelMapper/RxSwift', '~> 4.1.0'
pod 'Then', '~> 2.1'
pod 'RxDataSources', '~> 1.0'
pod 'ObjectMapper'
pod 'RxAlamofire/RxCocoa'
end
target 'NewAPPTests' do
pod 'RxBlocking', '~> 3.0'
pod 'RxTest', '~> 3.0'
end
**Platform/Environment**
iOS
**Xcode version**:
```
Xcode 8.3.2
```
**Installation method**: CocoaPods
**I have multiple versions of Xcode installed**: no
Status: Issue closed
Answers: username_1: Hi @username_0 ,
This is what I get when I try to use your Podfile.
```
[05/20 15:35:51] username_1 @ Krunoslavs-MacBook-Pro ~/Projects/RxIntegrations/Share/NewAPP (master) 26✔ 2✎ ❓ $ pod update
Update all pods
Updating local specs repositories
Analyzing dependencies
[!] Unable to satisfy the following requirements:
- `Moya-ModelMapper/RxSwift (~> 4.1.0)` required by `Podfile`
None of your spec sources contain a spec satisfying the dependency: `Moya-ModelMapper/RxSwift (~> 4.1.0)`.
You have either:
* mistyped the name or version.
* not added the source repo that hosts the Podspec to your Podfile.
Note: as of CocoaPods 1.0, `pod repo update` does not happen on `pod install` by default.
```
You were also missing
```
pod 'RxSwift', '~> 3.0'
pod 'RxCocoa', '~> 3.0'
```
inside `target 'NewAPPTests' do`
This is the Podfile that works for me.
```
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '10.0'
use_frameworks!
target 'NewAPP' do
pod 'Alamofire', '~> 4.4'
pod 'SwiftyJSON'
pod 'RxSwift', '~> 3.0'
pod 'RxCocoa', '~> 3.0'
pod 'RxOptional'
pod 'Moya/RxSwift'
pod 'Then', '~> 2.1'
pod 'RxDataSources', '~> 1.0'
pod 'ObjectMapper'
pod 'RxAlamofire/RxCocoa'
end
target 'NewAPPTests' do
pod 'RxSwift', '~> 3.0'
pod 'RxCocoa', '~> 3.0'
pod 'RxBlocking', '~> 3.0'
pod 'RxTest', '~> 3.0'
end
```
This doesn't look like a RxSwift issue to me.
Chris-Johnston/Grappel
322136172
Title: Main menu controller navigation Question: username_0: Allow the player to navigate the main menu with the controller if one is plugged in. D-pad would probably be best for this, maybe using the axis-to-button wrapper to provide discrete up/down/left/right presses. **Estimated: 45** **Min: 25** **Max: 75**
pywinauto/pywinauto
606875646
Title: encrypted_keys Question: username_0:
```
app = application.Application()
app.connect(path=r"C:\Program Files\notepad.exe")
encrpt = app.top_window()
encrpt = app.top_window().type_keys("this_test")
# here, is it possible to keep ("this_test") encrypted so that no one can see it, e.g. if it were my password?
```
Answers: username_1: Hey @username_0, I don't think the question is directly related to pywinauto. Generally, you shouldn't keep any passwords/auth. tokens in your scripts. It's considered a security hole.
The simplest alternative would be to prompt the user to enter the password when you start your script. Another option is to read it from a file, but it poses exactly the same risk, unless you start creating a special user with very limited permissions who alone is able to access the password file.
You can see a lot of discussions about passwords in scripts all over the Internet. One argument is: if you really don't care about keeping your "secret" in the script, why should you care about trying to hide it, or even have it at all?
username_0: Hi username_1, I am trying to automate a routine procedure that includes typing a password and username every time we log on, and I do it very often. So, is there any way to encrypt the password? If I prompt the user, it is no longer automated. Anyhow, what I want is to create a function which returns the characters, and use that function here. Is that possible?
username_1: Not pywinauto related. Not sure I can help you with that.
Status: Issue closed
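For what it's worth, the prompt-at-startup approach the maintainer suggests can be done without echoing the password, so nothing secret ever lives in the script file. A minimal sketch (the notepad path is just the example from the question):

```python
import getpass
from pywinauto import application

password = getpass.getpass("Password: ")  # typed at runtime, never stored in the script

app = application.Application()
app.connect(path=r"C:\Program Files\notepad.exe")  # illustrative target from the question
# Note: type_keys treats { } + ^ % as special characters, so unusual
# passwords may need escaping before being sent.
app.top_window().type_keys(password, with_spaces=True)
```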
manan025/DS-Algo-Zone
1013309096
Title: Vertical Traversal of Binary tree Question: username_0: ## 🚀 Feature
We will be printing a binary tree in vertical order.
Input:
![image](https://user-images.githubusercontent.com/35271444/135622621-e83ed3fb-ba3b-4278-93d7-d1743d7d0132.png)
**Output**: 4,2,1,5,3,6
**Explanation**: As we can see, there are 5 vertical lines which can pass through the tree:
- 1st vertical line - 4
- 2nd vertical line - 2
- 3rd vertical line - 1,5
- 4th vertical line - 3
- 5th vertical line - 6
### Have you read the Contribution Guidelines?
Yes
## Assignees (Do not make changes in this section until asked to do so)
C -
C# -
C++ -
Go -
Java -
Javascript -
Kotlin -
Python -
Answers: username_0: @username_1 I would like to take up this problem in JAVA
username_1: @username_0 Java assigned
username_2: @username_1 Please assign python to me .
username_1: @username_2 - Python assigned
username_3: Hey, please assign C++ to me
username_1: @username_3 - assigned cpp
username_4: @username_1 I would like to contribute in Kotlin
username_1: @username_4 - Kt -assigned
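For reference while the language slots above are being filled in, one standard approach is BFS while tracking each node's horizontal distance from the root, then emitting columns left to right. A Python sketch (the `Node` class is ad hoc, just enough to rebuild the example tree):

```python
from collections import defaultdict, deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def vertical_order(root):
    if root is None:
        return []
    columns = defaultdict(list)   # horizontal distance -> values, top to bottom
    queue = deque([(root, 0)])
    while queue:
        node, hd = queue.popleft()
        columns[hd].append(node.val)
        if node.left:
            queue.append((node.left, hd - 1))
        if node.right:
            queue.append((node.right, hd + 1))
    return [v for hd in sorted(columns) for v in columns[hd]]

# The tree from the issue: 1 at the root, 2/3 below it, 4/5 under 2, 6 under 3.
root = Node(1, Node(2, Node(4), Node(5)), Node(3, None, Node(6)))
print(vertical_order(root))  # [4, 2, 1, 5, 3, 6]
```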
hyperledger/iroha
920530511
Title: [BUG] Configuration inconsistency! Move common blockchain parameters to genesis.block Question: username_0: Let's consider `config.sample`:
```
{
"block_store_path" : "/tmp/block_store/",
"torii_port" : 50051,
"internal_port" : 10001,
"pg_opt" : "host=localhost port=5432 user=postgres password=<PASSWORD>",
"max_proposal_size" : 10,
"proposal_delay" : 5000,
"vote_delay" : 5000,
"mst_enable" : false,
"mst_expiration_time" : 1440,
"max_rounds_delay": 3000,
"stale_stream_max_rounds": 2,
"metrics": "127.0.0.1:8080"
}
```
Some configuration parameters, such as `proposal_delay`, `max_proposal_size`, `mst_expiration_time`, `mst_enable`, and others, **MUST be the same** all over the Iroha network. Inconsistent (unequal) parameters may (should, will) lead the Iroha network (or some nodes) to undefined behavior and/or node freezes.
## One possible solution
* make these parameters a part of the blockchain and store them in the genesis block.
* provide additional commands to edit configuration parameters
## Another solution
Always synchronize the parameters that must be equal; mark a neighbor node INVALID if it has inconsistent parameters, and do not negotiate blocks with an "invalid" node.
## Suggest your solution please, if you have ideas
---------
Kind ping @lebdron @username_2 @iceseer @LiraLemur
-------
@username_1, could you please test and give logs from an inconsistent Iroha network, i.e. where `max_proposal_size` is different
Answers: username_1: I'll try to do this on a two-node test network tomorrow morning.
username_2: yes, they should be part of the genesis block. I would create new commands and permissions that update configuration parameters
username_1: Here are the results https://gist.github.com/username_1/6c8d82898f48803b90f27453058eabc5 for 2 nodes with `"max_proposal_size" : 20,` vs `"max_proposal_size" : 10,`. This is version iroha 1.2.0. Scenarios:
1. Running 2 nodes with different `"max_proposal_size"`, then shutting down - everything looks OK
2. Running 2 nodes with different `"max_proposal_size"`, then sending 2 transactions and shutting down - everything looks OK
3. I'll now try to send 21 fast transactions (without waiting for a response), then update the post and the gist
username_1: @username_0 I've created tests. It looks like it works OK when I send transactions to the node which accepts the smaller `max_proposal_size`; I'll try the opposite and update the logs.
username_1: @username_0 I've performed the tests again on version iroha 1.2.1, two-node network: https://gist.github.com/username_1/7bc47e05dd82f83783f7bceee300d8b6 It looks like it works; sometimes it uses `"max_proposal_size"` from one node, and the next time `"max_proposal_size"` from the other node. In the attached gist are:
1. config1.sample
2. config2.sample
3. genesis.block
4. logs from node 1
5. logs from node 2
6. source code in python
7. output of the code
username_0: @username_1 thank you for the response. I definitely know that some configurations led the Iroha network to halt. Let me check them out from history.
username_1: I've also checked another parameter on three nodes:
1. Node1: `"proposal_delay" : 1000,` and `"max_proposal_size" : 10,`
2. Node2: `"proposal_delay" : 3000,` and `"max_proposal_size" : 20,`
3. Node3: `"proposal_delay" : 5000,` and `"max_proposal_size" : 30,`
and it is also working, this time 8 blocks: 0000000000000002(10txs) 0000000000000003(20txs) 0000000000000004(10txs) 0000000000000005(30txs) 0000000000000006(10txs) 0000000000000007(10txs) 0000000000000008(10txs) 0000000000000009(1tx)
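Until such parameters live in the genesis block, a cheap stopgap is an operator-side pre-flight check that diffs the consensus-critical fields across all nodes' config files before starting the network. A hypothetical sketch (the key list mirrors the parameters named above; nothing like this exists in Iroha itself):

```python
import json

# Parameters that, per this issue, must match on every peer.
CONSENSUS_KEYS = ("proposal_delay", "max_proposal_size", "mst_enable", "mst_expiration_time")

def consensus_view(path):
    with open(path) as fh:
        cfg = json.load(fh)
    return {key: cfg.get(key) for key in CONSENSUS_KEYS}

def inconsistent_configs(paths):
    """Return the config files whose consensus-critical fields differ from the first one."""
    views = {path: consensus_view(path) for path in paths}
    baseline = views[paths[0]]
    return [path for path, view in views.items() if view != baseline]

print(inconsistent_configs(["node1/config.sample", "node2/config.sample"]))
```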
videojs/video.js
254585396
Title: How can I detect that the video is stuck or failed to load? Question: username_0: ## Description
Briefly describe the issue. Include a [reduced test case](https://css-tricks.com/reduced-test-cases/), we have a [starter template](http://jsbin.com/axedog/edit?html,output) on JSBin you can use.
## Steps to reproduce
Explain in detail the exact steps necessary to reproduce the issue.
1.
2.
3.
## Results
### Expected
Please describe what you expected to see.
### Actual
Please describe what actually happened.
### Error output
If there are any errors at all, please include them here.
## Additional Information
Please include any additional information necessary here. Including the following:
### versions
#### videojs
what version of videojs does this occur with?
#### browsers
what browsers are affected?
#### OSes
what platforms (operating systems and devices) are affected?
### plugins
are any videojs plugins being used on the page? If so, please list them below.
Answers: username_1: I can't understand what you're saying
Status: Issue closed
UQdeco2800/minesim
120078525
Title: Import Weather System From Farm Question: username_0: As the particle renderer's primary use in the Farm sim was to render weather, it seemed natural to include it as a design feature. However, I don't think the entire weather system's forecasting model needs to be included as we simply want to cycle through different weather patterns. I don't think letting the player know about the weather forecast on a week-by-week basis is core to the game, but if people want to, they can discuss it below.
ContinuumIO/anaconda-issues
310401617
Title: Navigator Error Question: username_0: ## Main error Application <b>spyder</b> launch may have produced errors. ## Traceback ``` [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 ``` ## System information ``` platform: osx-64 version: 1.6.2 conda: 4.3.30 qt: 5.6.2 language: en python: 2.7.13 os: Darwin;16.7.0;Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64;x86_64;i386 pyqt: 5.6.0 ``` Answers: username_1: Closing as duplicate of #1778 --- Please remember to update to the latest version of Navigator to include the latest fixes. Open a terminal (on Linux or Mac) or the Anaconda Command Prompt (on windows) and type: ``` $ conda update conda $ conda update anaconda-navigator $ conda update navigator-updater ``` --- **See Issue #1778 for more information on how to fix this.** Status: Issue closed
theproductiveprogrammer/luminate
694414636
Title: memo with payment
Question:
username_0: Many deposits require a "memo" field with a payment, but this wallet does not seem to support memo fields.

Answers:
username_1: Added memo support. Use `--memo 'Your Memo Text'` to add memos to supported transactions. For example:
```
./luminate activate --from activeAccount --amt 2 --memo 'Activating now' inactiveAccount
```
Thanks for the suggestion @username_0 ! 👍
Status: Issue closed
phillipadsmith/daily
138237742
Title: Daily log for Thursday, Mar 3, 2016
Question:
username_0: ID: 2016-03-03T09:04:00-08:00

## Log
- [ ] Meditate
- [ ] Two sun salutations
- [ ] Exercise
- [ ] FL
- [ ] 90-minutes of creative work
- [ ] Follow-up/check-in with a friend

## Details
Meditate:
Exercise:
90-minutes of creative work:
Follow-up/check-in with a friend:

Status: Issue closed
Direwolf20-MC/BuildingGadgets
370112481
Title: [Feature request] Auto conversion for chisel types.
Question:
username_0: The building gadgets are fantastic, but when building with different chiseled types of materials, swapping types can be frustrating. It would be great if building gadgets had some sort of chisel support whereby, if you had a chiseled block selected and the base block (or another chiseled type of the same block) in your inventory, it auto-converted to the type selected by the gadget.

Answers:
username_1: The problem is that this would bypass normal usage of the chisel, as you would no longer use up its durability. Though in the case of a chisel this is not much of a loss; because of the materials used, one has practically endless chisels...
username_0: That was my thinking. There could be an option requiring a chisel in the inventory that loses durability, or even an increased energy cost when converting chiseled blocks. It just seems like a really cool feature to have, and one none of the other building tools really offer.
username_2: Theoretically, it wouldn't be hard to implement with Chisel's API, would it?
username_3: Decided against implementing this :)
Status: Issue closed
github-vet/rangeloop-pointer-findings
774944482
Title: k8snetworkplumbingwg/k8s-net-attach-def-controller: vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go; 4 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/k8snetworkplumbingwg/k8s-net-attach-def-controller/blob/3fa64d1d690952d17f1e0995279305fb01670905/vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go#L1469-L1472)

<details>
<summary>Click here to show the 4 line(s) of Go which triggered the analyzer.</summary>

```go
for i, pod := range newPodList(int32(len(tc.restartCounts)), v1.PodRunning, job) {
	pod.Status.ContainerStatuses = []v1.ContainerStatus{{RestartCount: tc.restartCounts[i]}}
	podIndexer.Add(&pod)
}
```
</details>

<details>
<summary>Click here to show extra information the analyzer produced.</summary>

```
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
"(ServeHTTP, 2)" -> {"(tryUpgrade, 2)";}
"(tryUpgrade, 2)" -> {}
"(newClientTransport, 6)" -> {}
"(Get, 2)" -> {"(Get, 3)";"(Accept, 2)";}
"(Has, 1)" -> {"(Get, 1)";}
"(Update, 2)" -> {"(Get, 2)";}
"(ConnectWithRedirects, 5)" -> {"(Dial, 1)";}
"(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
"(Delete, 1)" -> {"(Delete, 2)";"(Get, 2)";}
"(setSupportedSubsystems, 1)" -> {"(Set, 2)";}
"(Add, 2)" -> {"(Remove, 1)";"(newCurvePoint, 1)";"(Add, 3)";"(Mul, 3)";"(add, 2)";}
"(NewSSHTunnelList, 4)" -> {}
"(Run, 2)" -> {"(run, 2)";"(List, 1)";"(Get, 0)";"(Write, 1)";}
"(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
"(Search, 2)" -> {"(List, 1)";}
"(addConnIfNeeded, 3)" -> {}
"(updateTransport, 5)" -> {}
"(NewServerConn, 2)" -> {"(serverHandshake, 1)";}
"(NewCertSigner, 2)" -> {"(New, 1)";}
"(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
"(Get, 0)" -> {"(Get, 3)";"(Get, 2)";}
"(Mul, 3)" -> {"(Get, 0)";}
"(Delete, 2)" -> {"(Accept, 2)";}
"(add, 2)" -> {"(Get, 0)";}
"(Write, 1)" -> {"(Set, 2)";}
"(CreateAndInitKubelet, 30)" -> {"(NewMainKubelet, 30)";}
"(NewClientConn, 3)" -> {"(clientHandshake, 2)";}
"(Add, 1)" -> {"(NewCertSigner, 2)";"(add, 1)";"(Has, 1)";"(get, 1)";"(delete, 1)";"(Add, 2)";"(insertCert, 4)";}
"(add, 1)" -> {"(New, 1)";"(Update, 1)";"(Get, 1)";}
"(Get, 1)" -> {"(Get, 2)";"(Search, 2)";}
"(clientHandshake, 2)" -> {"(newClientTransport, 6)";}
"(TryDial, 1)" -> {"(TryDialWithAddr, 2)";}
"(Get, 3)" -> {"(Copy, 2)";"(Do, 3)";"(RoundTrip, 1)";}
"(Add, 3)" -> {"(Get, 0)";}
"(awaitOpenSlotForRequest, 1)" -> {}
"(Dial, 1)" -> {"(dial, 2)";"(TryDial, 1)";}
"(Start, 1)" -> {"(Run, 1)";}
"(run, 2)" -> {"(UpdateTransport, 4)";"(RunKubelet, 4)";}
"(newCurvePoint, 1)" -> {"(Get, 0)";}
[Truncated]
"(run, 1)" -> {"(Set, 2)";}
"(newServerTransport, 4)" -> {}
"(RunKubelet, 4)" -> {"(CreateAndInitKubelet, 30)";"(startKubelet, 5)";}
"(TryDialWithAddr, 2)" -> {"(NewClientConn, 3)";}
"(insertCert, 4)" -> {"(New, 1)";}
"(startControllers, 5)" -> {}
"(Update, 1)" -> {"(Get, 2)";"(Update, 2)";"(setSupportedSubsystems, 1)";}
"(delete, 1)" -> {"(Delete, 1)";}
"(Do, 3)" -> {}
"(serverHandshake, 1)" -> {"(newServerTransport, 4)";}
}
```
</details>

Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.

commit ID: 3fa64d1d690952d17f1e0995279305fb01670905
Status: Issue closed
tomayac/local-reverse-geocoder
835979068
Title: Crash when replacing yesterday's geodata (Extract.writer.error [Error: ENOENT: no such file or directory, lstat '/tmp/geonames/cities/cities1000.txt'])
Question:
username_0: ### Reproduction steps:
1. Run `local-reverse-geocoder`, let it create files with dates in their names (eg. `cities/cities1000_2021-03-19.txt`)
2. Wait another day (or rename the file to an older date, eg. `cities/cities1000_2021-03-19.txt` → `cities/cities1000_2021-03-15.txt`)
3. Start `local-reverse-geocoder` again.

### What I expected:
Geocoder updates its files.

### What actually happens
Geocoder crashes with `Extract.writer.error [Error: ENOENT: no such file or directory, lstat '/tmp/geonames/cities/cities1000.txt']`.

The cause is this:
- we call [unzip.Extract](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L466)
- Extract extracts files from the downloaded zip into a folder and emits [close](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L467)
- We [rename](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L468) and [unlink](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L477)
- Extract uses the `fstream` module, which calls `stat` on the extracted file *after emitting close* - see the [code](https://github.com/npm/fstream/blob/42354590e23bb514eb5c869eea64406be2947c6c/lib/writer.js#L269)!
- Calling `stat` on the now-nonexistent file causes the crash (the error is emitted from fstream, and not handled by anyone).

### Fix ideas:
1. Fix the underlying stream libs ([fstream](https://github.com/npm/fstream) shouldn't [touch the extracted file](https://github.com/npm/fstream/blob/42354590e23bb514eb5c869eea64406be2947c6c/lib/writer.js#L269) after it emits `close`)
2. Ignore the `error` event from `unzip.Extract`:
```
extractStream
  .on('error', function (err) {
    if (err.code === 'ENOENT') {
      // ignore - the fstream writer runs stat/lstat after the extract is finished
      // (and `close` was emitted), and it crashes because we have already used and moved/deleted its file
    } else {
      throw err
    }
  })
```
3. Use unzip to extract just the single file we need, in memory, without writing to disk

### Beware:
All `_get*data` methods have this problem; it needs to be fixed everywhere.

Answers:
username_0: It is very curious that no one has reported this before, as it happens every time you don't clean up the data and restart local-reverse-geocoder. I may have discovered it by moving `dumpDirectory` to `/tmp/geonames` (which I believe should be the default place; the geo database shouldn't just be dumped inside node_modules).
username_1: I think what you're asking for is making [`GEONAMES_DUMP`](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L95) user-configurable. I'm happy to merge a PR that adds this.
username_0: @username_1 Hi Thomas, that was already configurable - see [here](https://github.com/username_1/local-reverse-geocoder/blob/40f591edbfe138ff27251ebd2ed9304fdbb27ed2/index.js#L658). I dived into the error with streams, but I think that the real problem is that one of `node-unzip-2`, `fstream`, or their interaction is wrongly written. However, there was no need to actually store the unzipped files on disk, so I forked `local-reverse-geocoder` to https://github.com/username_0/local-reverse-geocoder and rewrote it to use `unzip-stream` to extract the wanted file from the stream to disk.
See https://github.com/username_0/local-reverse-geocoder/commit/6e9650062678610b20f400697c945d19bf0323b2
username_1: …I wonder why you went for forking this. I am and always was happy to merge PRs. I'd be happy to add you to the [list of contributors](https://github.com/username_1/local-reverse-geocoder#contributors).
username_0: @username_1 Please don't take it personally, I just needed it fixed quickly. I just created a pull request https://github.com/username_1/local-reverse-geocoder/pull/40.
Status: Issue closed
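For completeness, a rough sketch of fix idea 3 above: parse the zip as a stream and write out only the one entry needed, which is the approach the fork takes with `unzip-stream` (an assumption; the file names and paths here are illustrative, not the module's actual ones):

```js
var fs = require('fs');
var unzip = require('unzip-stream');

// Parse the downloaded archive as a stream and keep only the entry we need,
// so fstream never stats a file that was renamed or deleted afterwards.
fs.createReadStream('/tmp/geonames/cities1000.zip')
  .pipe(unzip.Parse())
  .on('entry', function (entry) {
    if (entry.path === 'cities1000.txt') {
      entry.pipe(fs.createWriteStream('/tmp/geonames/cities/cities1000.txt'));
    } else {
      // Drain unwanted entries so the parser keeps flowing.
      entry.autodrain();
    }
  });
```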
mhng-feedback/mhng-mammals
490562349
Title: Monthly VertNet data use report for 2019-8, resource mhng_mammals
Question:
username_0: Your monthly VertNet data use report is ready!

You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/5a659248-1f70-11e3-b2c5-00145eb45e9a/201908/

Raw text and JSON-formatted versions of the report are also available for download from this link. A copy of the text version has also been uploaded to your GitHub repository under the "reports" folder at:
https://github.com/mhng-feedback/mhng-mammals/tree/master/reports

A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/5a659248-1f70-11e3-b2c5-00145eb45e9a/

You can find more information on the reporting system, along with an explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html

Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html

Thank you for being a part of VertNet.
jenkinsci/azure-keyvault-plugin
819219160
Title: Using multiple key vaults
Question:
username_0: <!-- Never report security issues on GitHub or other public channels (Gitter/Twitter/etc.), follow the instruction from [Jenkins Security](https://jenkins.io/security/) to report it on [Jenkins Jira](https://issues.jenkins-ci.org) -->

### Your checklist for this issue
- [x] Jenkins version 2.281
- [x] Plugin version 2.2
- [x] OS CentOS 7

### Description
It would be great to be able to connect multiple key vaults. Is this something that is already supported and that I missed in the documentation?

Answers:
username_1: It depends on what you want to do. In pipelines it's supported; there's a key vault URL override you can use. The credential provider currently just supports one vault.
username_0: @username_1 In my case, I have a lot of values in three key vaults, and fetching these values by describing each one in the pipeline is a very big problem.
username_1: We have something fairly crazy here that allows it:
https://github.com/hmcts/cnp-jenkins-library/blob/master/vars/withTeamSecrets.groovy
https://github.com/hmcts/draft-store/blob/master/Jenkinsfile_CNP#L13-L29
Does that help? Or are you after something else?
username_0: Maybe this will help. Thanks. Is there a chance that you will add the ability to use multiple vaults in the future?
username_1: Possibly, based on demand. How would you see it working? Are you looking for it with the credential provider? Possibly with different credentials per vault? I think it would have to be namespaced then, something like `myteam-dev/my-secret`.
username_0: I think I'm not the only one with this issue. And yes, in my opinion the most logical solution would be separate credentials for each vault. It is very bad from a security point of view to store keys for all environments in one vault.
username_1: Via `withAzureKeyvault` you can access as many vaults as you want, but it requires nesting, which isn't ideal. Are you wanting this to be easier to use in `withAzureKeyvault`, or are you using the credential provider where you just go `withCredentials([string(credentialsId: 'github-api-token', variable: 'GITHUB_API_TOKEN')]) {`?
username_0: For us the best solution is to connect via the configuration-as-code plugin. If we could just connect a second key vault, as I've shown below, that would be great. All other methods do not suit us very much.
```
unclassified:
  azureKeyVault:
    credentialID: "azure_credentials"
    keyVaultURL: https://some.vault.azure.net/
  azureKeyVault:
    credentialID: "azure_credentials2"
    keyVaultURL: https://some2.vault.azure.net/
```
username_1: Sure, makes sense. FYI @username_3, similar to your AWS issue.
username_0: @username_1 Is there a chance that you will implement such a solution within a couple of months, or is there no chance?
username_1: There's a chance but no plans right now. If someone else were to contribute it then I can spare the time to review, guide and test it.
username_1: My suggestion without the 'namespace' feature support is to just prefix it with the vault name (or account ID in AWS) and then a separator like a `/`.
username_2: It will be very useful for me also.
username_3: Does Azure have a notion of `/` separators within secret names to create a hierarchy? For example, can you have things like
- `environments/staging/api-key`
- `environments/production/api-key`

AWS does have this, and that's what complicates just adding the account ID as a prefix. The plugin won't know which bit of the combined name is an account ID and what's part of the hierarchy.
username_1: No, Azure has separate 'vaults' for that.
walkamongus/sssd
147902110
Title: Does not manage sssd plugin packages
Question:
username_0: For example, it does not install `sssd-ldap` or other libraries in such a way that adding or removing them restarts the service.

Answers:
username_1: Is there a specific plugin that is not installed as a dependency I should look into? At least on RHEL7, `sssd-ldap` comes along as a dependency of `sssd`:
```
[root@localhost ~]# yum deplist sssd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.cogentco.com
 * epel: mirror.cogentco.com
 * extras: mirror.solarvps.com
 * updates: mirrors.centos.webair.com
package: sssd.x86_64 1.13.0-40.el7_2.2
  dependency: python-sssdconfig = 1.13.0-40.el7_2.2
   provider: python-sssdconfig.noarch 1.13.0-40.el7_2.2
  dependency: sssd-ad = 1.13.0-40.el7_2.2
   provider: sssd-ad.x86_64 1.13.0-40.el7_2.2
  dependency: sssd-common = 1.13.0-40.el7_2.2
   provider: sssd-common.x86_64 1.13.0-40.el7_2.2
   provider: sssd-common.i686 1.13.0-40.el7_2.2
  dependency: sssd-common-pac = 1.13.0-40.el7_2.2
   provider: sssd-common-pac.x86_64 1.13.0-40.el7_2.2
  dependency: sssd-ipa = 1.13.0-40.el7_2.2
   provider: sssd-ipa.x86_64 1.13.0-40.el7_2.2
  dependency: sssd-krb5 = 1.13.0-40.el7_2.2
   provider: sssd-krb5.x86_64 1.13.0-40.el7_2.2
  dependency: sssd-ldap = 1.13.0-40.el7_2.2
   provider: sssd-ldap.x86_64 1.13.0-40.el7_2.2
  dependency: sssd-proxy = 1.13.0-40.el7_2.2
   provider: sssd-proxy.x86_64 1.13.0-40.el7_2.2
```
username_0: I'll have to look more closely; sssd-ldap did not seem to come in as a dependency. However, it would be good for the module to verify their installation. I'll work on a PR, but no promises - puppet isn't my current primary $work at the moment.
Status: Issue closed
username_1: latest version 2.0.0 should make including any plugin packages easy via the `required_packages` parameter
uncharted-distil/distil
447817694
Title: Geo selection should clear by clicking close button on the selection rectangle
Question:
username_0: After you drag-select something on the geo view and click the X button on the selection rectangle, I would expect the selection to be canceled, but the X button doesn't disappear and the selection doesn't clear. Instead, it makes a 1x1 selection.
Status: Issue closed
dotnet/AspNetCore.Docs
771058462
Title: IIS Hosting documentation has lost instructions for configuring data protection.
Question:
username_0: In the warning box on this page we have links which say
- Creation of a registry hive for ASP.NET Core Data Protection
- Configuration of the app pool's Access Control List (ACL)

These links used to give instructions on how to configure data protection and the app pool. Now they lead to a generic page with no instructions, and seemingly no link to instructions either (https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/iis/?view=aspnetcore-5.0#data-protection for example). So I can't say "Well, we documented this" any more.

---

#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f92a99a8-6ffa-e5f9-b7ef-f13fa1e0f17b
* Version Independent ID: 51273790-5178-875b-010e-15d7725da535
* Content: [Publish an ASP.NET Core app to IIS](https://docs.microsoft.com/en-us/aspnet/core/tutorials/publish-to-iis?view=aspnetcore-5.0&tabs=visual-studio#feedback)
* Content Source: [aspnetcore/tutorials/publish-to-iis.md](https://github.com/dotnet/AspNetCore.Docs/blob/master/aspnetcore/tutorials/publish-to-iis.md)
* Product: **aspnet-core**
* Technology: **aspnetcore-tutorials**
* GitHub Login: @Rick-Anderson
* Microsoft Alias: **riande**

Answers:
username_1: @Rick-Anderson ... I'm sitting here on a Saturday morning at 5am after the 🐈 just woke me up (*Rotten* 😼!) and can't work on the Jeep for another four hours or so. The work splitting up the *BIG* IIS topic into smaller topics broke those links to ...
* https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/iis/advanced?view=aspnetcore-5.0#data-protection
* https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/iis/advanced?view=aspnetcore-5.0#application-pool-identity

I'll patch the links up right now.
Status: Issue closed
nuxt-community/pwa-module
492014671
Title: `og:image` meta not being populated by default
Question:
username_0: ### Version
[v3.0.0-beta.18](https://github.com/pwa-module/releases/tag/v3.0.0-beta.18)

### Reproduction link
[https://deploy-preview-4072--bootstrap-vue.netlify.com/](https://deploy-preview-4072--bootstrap-vue.netlify.com/)

### Steps to reproduce
Use default settings for meta (or set `pwa.meta.ogImage` to `true`). Icons are successfully set, as are `shortcut icon` and `apple-touch-icon`, except `og:image` is no longer generated in the `<head>`.

### What is expected?
An `og:image` meta tag is created in the rendered output.

### What is actually happening?
No `og:image` meta tag is being rendered.

Answers:
username_0: Closing, as I just realized that `ogHost` needs to be specified as well.
Status: Issue closed
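For anyone landing here with the same symptom, a minimal sketch of the relevant `nuxt.config.js` section (the host URL is a placeholder; per the resolution above, `ogHost` gives the module the absolute base URL it needs to emit an `og:image` tag):

```js
// nuxt.config.js - assumes @nuxtjs/pwa is listed in `modules`.
export default {
  pwa: {
    meta: {
      // Ask the module to generate the og:image tag...
      ogImage: true,
      // ...which only works once it knows the absolute host to prefix it with.
      ogHost: 'https://example.com'
    }
  }
}
```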
laravel-idea/plugin
670781747
Title: [Request] Code generation for Dusk
Question:
username_0: Hey guys, first of all thank you for this incredibly good plugin. I noticed that code generation for Dusk tests would be really helpful. Many tests look like this:
```php
/**
 * Test sth.
 *
 * @group 1
 * @group 2
 *
 * @return void
 * @throws Throwable
 */
public function testSomeFunction(): void {
    $this->browse(static function(Browser $browser){
        $browser->visitRoute('route')
            ->assert...
            ->assert...
            ...
    });
}
```
This code block could easily be generated automatically (maybe without the assert methods?) and save a lot of time when designing test cases. What do you think about this? :)

Answers:
username_1: Hi, Florian. I'm going to release a new version soon. After that, I want to implement a new feature - custom generations. It will be possible to create your own code generation with your own template.
Little workaround for now: change the template and paths for the Create Feature or Create Unit Test generation and use it :)
https://laravel-idea.com/docs/3.x/generation#configuration
username_0: Hi @username_1, that's great news! I'm really looking forward to this feature! In the meantime, I'm not going to die from the little more typing ;)
-Florian
username_1: Shame on you ))
Status: Issue closed
pbaity/rocketchat-dark-mode
562987411
Title: Room info not contrast
Question:
username_0: The Room Info panel has poor contrast:
![image](https://user-images.githubusercontent.com/4023037/74213171-1abd0280-4ca9-11ea-90b9-6334d609394f.png)

Answers:
username_1: @username_0 Thanks for pointing this out. I'll work on this when I can unless you or someone else picks it up, but just to make sure I don't miss anything, please describe what exactly you see that needs to be fixed in the Room Info and Notification Preferences. See the bug report issue template for the types of details we're looking for.
username_0: Hi! I understand. I'm not a developer or CSS master, so for now I can only get the div classes with contrast problems:
- rc-switch-double
- rc-switch-double__label disabled
- rc-switch-double__description
- rc-switch-double__label
- rc-switch-double__description

Status: Issue closed
MicrosoftDocs/visualstudio-docs
321075245
Title: Visual Studio Installer Microsoft R Client very old
Question:
username_0: Hi there,
I am trying to install a version of R - in particular, 3.4. However, the VS2017 installer dialog window only shows 3.3.2. This is a rather old version; how do I overcome this? I have tried installing the Microsoft R Client directly from Microsoft, but the offline installation keeps failing. I am behind a corporate firewall, so I need to install either offline or through the VS installer. Please advise.

Answers:
username_1: @TerryGLee or @username_2 do you know how @username_0 can install a newer version of R tools?
username_2: @jflam Do you have suggestions?
username_0: @username_2 yep, I have tried following those direct links; however, they try to download more data from a server which the corporate firewall blocks. There are other instructions on how to install it in an offline fashion, but they also fail, because again, they try to download more data from a server - it's like the offline mode doesn't activate. I have tried this with admin rights as well. Same story.
Is this part of the log insightful?
MSI (s) (10:CC) [13:46:32:408]: Windows Installer installed the product. Product Name: Microsoft R Client. Product Version: 3.4.3.0. Product Language: 1033. Manufacturer: Microsoft. Installation success or error status: 1603.
The VS installer still points to older versions.
username_2: Thanks. Am investigating.
username_0: Any news on this?
username_2: Not yet... I'm still waiting to hear back from the engineering team.
username_2: @username_0 Ultimately what needs to change here is that the VS installer should reference the newer version. Because that's a product and not a documentation issue, please file an issue on https://github.com/Microsoft/rtvs/issues (and refer to this present doc issue). That way you'll be in the conversation directly with the engineering team. In the meantime, I'm closing this issue in the docs repo here. Thanks.
Status: Issue closed
username_0: Finally got this to work.
1. Copy the following files to your temporary directory:
   MLM_9.1.0.0_1033.cab
   SRS_9.3.0.0_1033.cab
   SRO_3.4.3.0_1033.cab
2. To access the temporary directory, open Windows Explorer and enter the address %TEMP%. Now paste those files in there.
3. Close any running Visual Studio 2017 process that may be open.
4. Now run the RClientSetup.exe file.
5. After the installation successfully completes, open Visual Studio 2017.
6. Enable the newly installed Microsoft R Client as your R interpreter (i.e. Microsoft R Client (3.4.3.0)).
7. You may experience strange UI errors in the Visual Studio R Interactive Window.
8. Close Visual Studio 2017.
9. Go to C:\Users\[YOUR_USERNAME]\AppData\Local\Microsoft\VisualStudio
10. Now delete any 15.*** directories that may be in there.
11. Open Visual Studio 2017.
username_2: Glad to hear it, and thanks for writing all the details! Issues here do appear in the docs page, so others will benefit from your sharing!
polkadot-js/apps
547559030
Title: Better filtering for the Accounts interface
Question:
username_0: If I have a lot of accounts in the Accounts tab, the UI becomes really hard to navigate. Tag filtering is a great help, but after a certain point it doesn't help either - when I have more than a certain number of tags (about 15, I suppose?), additional tags are no longer shown in the dropdown; there's also no typeahead search for tags, and they seem to not always be sorted in any predictable way. Having the filtering interface work more like the Validators filter in the Nomination UI (but still with tagging support!) would be a great help for my use case.
Status: Issue closed
json-schema-org/json-schema-vocabularies
572991606
Title: patternGroups and patternRequired
Question:
username_0: Originally added as wiki: https://github.com/json-schema/json-schema/wiki/patternGroups-and-patternRequired-(v5-proposal)

### Proposed keywords
This proposal would introduce two new keywords:
- `patternGroups` and `patternRequired`

these keywords would **complement** the existing keyword:
- `patternProperties`

### Purpose
Currently, schemas can specify a minimum/maximum number of object properties, but they cannot place such constraints on particular groups of properties. This proposal basically allows us to specify the number of properties that must match a particular pattern.

### Values
#### `patternGroups`
The value of `patternGroups` would be an object. The keys of the object would be regular expressions (exactly like the existing `patternProperties`). The values inside `patternGroups` would be objects, containing zero or more of the following properties:
- `minimum` - the minimum number of properties in the data that MUST match the corresponding pattern
- `maximum` - the maximum number of properties in the data that MUST match the corresponding pattern
- `schema` - the schema that matching properties must follow

`minimum`/`maximum` would be non-negative integers, and `schema` would be a schema.

#### `patternRequired`
The value of this keyword should be an array of patterns (to require at least one property matching each pattern).

### Behaviour
#### `patternGroups`
If the instance is an object, then for every entry in `patternGroups`:
1. the set of properties matching that pattern is collected
2. if `minimum` or `maximum` are specified in the `patternGroups` entry, then the size of this property set must be between these values (inclusive)
3. if `schema` is specified in the `patternGroups` entry, then for every property in the property set, the corresponding object member in the instance must follow that schema

#### `patternRequired`
If the instance is an object, then to be valid the data should have at least one property matching each pattern (the same property can match multiple patterns).

### Example with `patternGroups`
```json
{
  "type": "object",
  "patternGroups": {
    "^[a-z]+$": {
      "minimum": 1,
      "schema": {"type": "string"}
    },
    "^[0-9]+$": {
      "minimum": 1,
      "schema": {"type": "integer"}
    }
  }
}
```

### Example with `patternRequired`
```json
{
  "type": "object",
  "patternProperties": {
    "^[a-z]+$": {"type": "string"},
    "^[0-9]+$": {"type": "integer"}
  },
  "patternRequired": ["^[a-z]+$", "^[0-9]+$"]
}
```

Both these schemas express the constraint that instance objects must have at least one alphabetic key and at least one numeric key. (This constraint is currently not possible to express.) Additionally, the alphabetic keys must hold strings, and the numeric keys must hold integers.

Valid: `{"abc": "foo", "123": 456}`
Invalid: `{"abc": "foo", "def": "bar"}`

For the simple case, the syntax of `patternRequired` is simpler and consistent with `properties`/`required`.
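To make the proposed `patternGroups` behaviour concrete, here is a rough validator sketch (illustrative only; `validate` stands in for whatever subschema-validation function an implementation already has):

```js
// Returns true when `instance` (an object) satisfies a `patternGroups` value.
function checkPatternGroups(instance, patternGroups, validate) {
  return Object.keys(patternGroups).every(function (pattern) {
    var group = patternGroups[pattern];
    var re = new RegExp(pattern);
    // 1. Collect the set of properties matching this pattern.
    var matching = Object.keys(instance).filter(function (key) {
      return re.test(key);
    });
    // 2. The size of the set must respect minimum/maximum (inclusive).
    if (group.minimum !== undefined && matching.length < group.minimum) return false;
    if (group.maximum !== undefined && matching.length > group.maximum) return false;
    // 3. Every matching member must follow the group's schema, if one is given.
    return matching.every(function (key) {
      return group.schema === undefined || validate(instance[key], group.schema);
    });
  });
}
```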
AnnLuschik/To-Do-List
690762060
Title: Nesting
Question:
username_0: https://github.com/username_1/To-Do-List/blob/41d13bed0672596110ac0dd8210731f81fd30683/script.js#L225
The nesting is defined too explicitly here; if the structure changes even slightly, this code will stop working. Think about a way to get the element you need without a chain of hops through child elements.

Answers:
username_1: Implemented using additional classes.
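One common way to do that is to mark the target element with its own class (or data attribute) and query for it, rather than walking fixed child indexes. A small sketch (the class names here are illustrative, not the ones used in this repository):

```js
// Fragile: breaks as soon as one wrapper element is added or removed.
var label = listItem.children[0].children[1].children[0];

// Robust: find the element by its own class, wherever it sits inside the item.
var labelByClass = listItem.querySelector('.task-label');

// Going the other way: from a clicked button up to its enclosing list item.
document.addEventListener('click', function (event) {
  var item = event.target.closest('.task-item');
  if (item) {
    item.classList.toggle('done');
  }
});
```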
TheThingsNetwork/lorawan-stack
450844167
Title: Component documentation
Question:
username_0: **Umbrella issue**

#### Summary
Services and components need high-level documentation to enable knowledge transfer and make it easier to maintain and improve components.
...
#### Why do we need this?
Without documentation, maintenance and collaboration are nearly impossible. If we want to enable the community to contribute to and use the stack, we need high-level documentation for each component and service in the stack.
...
#### What is already there? What do you see now?
#395
All documentation is under `/docs/content`
...
#### What is missing? What do you want to see?
A `/docs/content/components` with a high-level description of all the services and components in the stack, following this plan:
1. Description / for what purpose
2. Its place in the architecture and/or role
3. What it does
4. Links to annex resources (like the API reference)
...
#### Environment
Hugo
...
#### How do you propose to implement this?
Each code owner writes the documentation for the component they own, under `/docs/content/components/<component>.md`
...
#### Can you do this yourself and submit a Pull Request?
Security
/pkg/auth @username_2 @username_1
/pkg/crypto @username_2 @username_1
Shared components
/pkg/component @username_2 @username_1
/pkg/types @username_2 @username_1 @rvolosatovs
/pkg/webui @bafonins @username_3
Subsystems
/pkg/applicationserver @username_2
/pkg/console @bafonins @username_3
/pkg/gatewayserver @username_2
/pkg/networkserver @username_2 @rvolosatovs
/pkg/identityserver @username_1
/pkg/joinserver @username_2 @rvolosatovs
...

Answers:
username_1: I think this kind of (code) documentation belongs in godoc, and not in Hugo. We could consider pulling package docs/comments from Go files into markdown, but IMO the godoc should be leading for documenting our Go packages.
username_0: I'm not aiming to make a technical doc, but more of an introduction: a description of the components, how they work with the other services, and what they provide, with some examples. Maybe we should shorten the scope to the subsystems.
username_2: There's a difference between component documentation and code documentation. This issue is about component documentation; see also the issue title and points 1-2. Code documentation should be godoc, which addresses a different audience, may have code examples and usage guidelines, etc. Since the stack is not intended to be used as a library, we don't have to spend time on that now. I do think, however, that the list of packages is confusing; there shouldn't be anything like that in our documentation. But I guess this is simply taken from `CODEOWNERS` to figure out who's most knowledgeable about each topic.
username_1: Can we agree on the following list of tasks then?
- [ ] `/docs/content/concepts/security` @username_2 @username_1 (both lorawan and ttn-lw-stack)
- [ ] `/docs/content/components/_index` @username_2 @username_1 @bafonins @username_3 (backend core component and base webui structure)
- [ ] `/docs/content/components/applicationserver` @username_2
- [ ] `/docs/content/components/console` @bafonins @username_3
- [ ] `/docs/content/components/gatewayserver` @username_2
- [ ] `/docs/content/components/gatewayconfigurationserver` @adriansmares
- [ ] `/docs/content/components/networkserver` @username_2 @rvolosatovs
- [ ] `/docs/content/components/identityserver` @username_1
- [ ] `/docs/content/components/joinserver` @username_2 @rvolosatovs
username_2: LGTM
username_1: No more discussion, @username_2 is working on the structure.
username_1:
- [ ] Components / Network Server (@roman)
- [ ] Components / Application Server (@johan)
- [ ] Components / Join Server (@roman)
- [ ] Components / Console (@username_3) also need to fix the screenshot
- [ ] Components / Crypto Server (@johan)
- [ ] Components / Device Claiming Server (@johan)
- [ ] Components / Gateway Configuration Server (@KrishnaIyer)
- [ ] Reference / API (@username_1)
- [ ] Reference / Configuration / Gateway Server Options (@iamBatman)
- [ ] Reference / Configuration / Application Server Options (@johan)
- [ ] Reference / Configuration / Console Options (@username_3)
- [ ] Reference / Configuration / Gateway Configuration Server Options (@KrishnaIyer)