DominoKit/domino-ui
560372153
Title: Introduce a getValue method for a DataTable cell

Question: username_0:
**Is your feature request related to a problem? Please describe.**
Currently it isn't possible to get the value of a cell renderer inside a DataTable.

**Describe the solution you'd like**
Introduce a getValue method for a cell, which a CellRenderer can implement.

Answers: username_1: Because the cell could contain any kind of element and could provide any kind of value, it would be hard to tie the cell to a specific data type, and it would bring complexity to both the implementation and the use of the datatable. So instead I am thinking of the following approach:

Dirty record approach:
=====================

In this approach the user can define a dirty record provider, which will be used to create a new record whenever we ask for the edited value of the row, e.g.:

```java
tableConfig.setDirtyRecordProvider(originalRecord -> /* create a dirty record; the default will return the same record */);
```

Then each cell can operate on that record by implementing a function, e.g. `onDirtyRecordUpdate(OriginalRecord, DirtyRecord)`; the default will return the dirtyRecord. Now in your code you can update the dirty record from within the cell renderer.

Status: Issue closed
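The dirty-record idea above is language-agnostic. Below is a minimal sketch in JavaScript (hypothetical names, not the domino-ui API): the table clones a record on demand via the provider, cell editors mutate the clone, and the original stays untouched until the caller decides what to do with the dirty copy.

```javascript
// Sketch of the dirty-record approach (hypothetical names, not domino-ui).
// The table clones a record on demand; cells edit the clone, never the original.
function createTable(records, dirtyRecordProvider = (r) => r) {
  const dirty = new Map(); // original record -> dirty copy
  return {
    getDirty(record) {
      if (!dirty.has(record)) dirty.set(record, dirtyRecordProvider(record));
      return dirty.get(record);
    },
  };
}

const original = { name: 'Ada', age: 36 };
// The provider here returns a shallow clone (the default would return the record itself).
const table = createTable([original], (r) => ({ ...r }));

// A cell editor updates the dirty copy:
table.getDirty(original).age = 37;

console.log(original.age);                 // 36 — original untouched
console.log(table.getDirty(original).age); // 37 — edited value lives on the dirty copy
```

The design choice mirrors the comment: the table never needs to know the cell's value type, it only needs a record-cloning strategy supplied by the user.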
sequelize/sequelize
1181188997
Title: $ sign in fn method causing issues

Question: username_0:
## Issue Creation Checklist
- [x] I have read the [contribution guidelines](https://github.com/sequelize/sequelize/blob/main/CONTRIBUTING.md)

## Bug Description
I have a function that looks like this:

```js
User.findByUsername = async (username) => {
  let user = await User.findOne({
    where: {
      username: sequelize.where(
        sequelize.fn('LOWER', sequelize.col('username')),
        '=',
        sequelize.fn('LOWER', username)
      ),
    },
  });
  return user;
};
```

If the username includes a `$` symbol, the lookup always fails. I dug around in the source code and found that if I change line 1611 in the query-generator.js file from:

```js
return this.escape(typeof arg === "string" ? arg.replace("$", "$$$") : arg);
```

to:

```js
return this.escape(typeof arg === "string" ? arg.replace("$", "$$") : arg);
```

everything works properly. I'm not familiar with this codebase, so I'm unsure why "$$$" was originally used, but it is breaking things for me.

### SSCCE
[**Here is the link to the SSCCE for this issue:** LINK-HERE](https://github.com/sequelize/sequelize-sscce/compare/main...

Answers: username_1: Likely related to https://github.com/sequelize/sequelize/issues/13817
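Part of the `"$$$"` vs `"$$"` confusion is JavaScript's replacement-string escapes: in `String.prototype.replace`, `$$` in the *replacement* inserts one literal `$`. So `"$$$"` effectively doubles the dollar sign in the output (presumably deliberate escaping on sequelize's side), while `"$$"` leaves it unchanged. A quick demonstration:

```javascript
// In a replacement string, "$$" is an escape producing one literal "$".
const input = 'user$name';

console.log(input.replace('$', '$$$')); // "user$$name" — "$" doubled in the output
console.log(input.replace('$', '$$'));  // "user$name"  — "$" kept as-is

// Also note a plain string pattern replaces only the FIRST occurrence;
// escaping every "$" requires a global regex:
console.log('a$b$c'.replace(/\$/g, '$$$$')); // "a$$b$$c"
```

Whether the doubling is correct for a given dialect is the actual bug under discussion; the snippet only shows what each replacement string produces.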
flutter/flutter
295098268
Title: Why is Localization in Flutter so complicated?

Question: username_0: Even though I defined a map like this:

```dart
static Map<String, Map<String, String>> _localizedValues = {
  'en': {
    'title': 'Grab Merchant',
    'help': 'Help',
  },
  'ja': {
    'title': 'タイトル',
    'help': 'ヘルプ',
  },
};

String get title {
  return _localizedValues[locale.languageCode]['title'];
}

String get help {
  return _localizedValues[locale.languageCode]['help'];
}
```

I have to use `MyLocalizations.of(context).title` to display the localized string. But when I want to define a list like this:

```dart
List<MenuItem> items = <MenuItem>[
  new MenuItem('help', MyLocalizations.of(context).help),
];
```

where can I get the `context`?

Answers: username_1: cc @username_2
Status: Issue closed

username_2: I apologize for not having responded to this. Believe it or not, a glitch in my GitHub configuration caused notifications to cease for a while and I overlooked quite a few issues. FWIW (now): the localization system's complexity, particularly the `BuildContext` dependency you noted, is a result of the fact that apps aren't defined with respect to a locale; widget subtrees are. Often they're the same thing, but in some cases part of an app will be within its own [Localizations](https://docs.flutter.io/flutter/widgets/Localizations-class.html).
Status: Issue closed

username_3: This is a major issue for me as well. The BuildContext dependency is very problematic. I need translated strings, for example, in my "view model" classes, which do not have access to BuildContext. If I have understood correctly, the BuildContext is needed only so that the app can change its language in real time if the user modifies the device language settings. In my opinion that is a largely useless feature, especially considering the problems it causes for my app architecture. First of all, 99% of users never change the device language; they have already selected the correct language when setting up their device. The remaining 1% change the device language maybe once during the device's lifetime. And most probably they do not have my app running when they do that. And if they do have it running, they could just restart the app to activate the new language settings. So in my opinion the dependency on BuildContext has been added unnecessarily and it is now a major issue. The dependency on BuildContext should be replaced with a "device language changed" callback and an ability to restart the app. For my app I now have only two bad options: either refactor my view model classes (ported from C#) heavily, or write my own string-loading system.

username_4: The build context is used so that different parts of the view can be configured with different languages. The `intl` package used for translation doesn't depend on anything from Flutter. You don't need to use any of that. It's just a generic solution that supports a broad range of requirements.

username_3: I have followed the guide [Internationalizing Flutter Apps](https://flutter.io/tutorials/internationalization) without thinking about it too much. I think the guide tells you to use `DemoLocalizations.of(BuildContext)` whenever you need to access localizations. After a closer look at the sample code I noticed there is a helper method `DemoLocalizations.load(Locale)` which can be used to load localizations without `BuildContext`. So this solves the problem for my app. Thanks.

username_5: If you are using IntelliJ (Android Studio), you can use the [Flutter i18n](https://github.com/long1eu/flutter_i18n) plugin.

username_6: @username_3 can you give an example/gist of how you can get to the translated string without using any context? I think this will help many people, like me and [others](https://stackoverflow.com/questions/51803755/getting-buildcontext-in-flutter-for-localization) for whom this is a major issue.

username_3: @username_6 Here is a sample based on my current implementation:

```dart
class Strings {
  Strings(Locale locale) : _localeName = locale.toString();

  final String _localeName;

  static Strings current;

  static Future<Strings> load(Locale locale) async {
    await initializeMessages(locale.toString());
    final result = Strings(locale);
    current = result;
    return result;
  }

  static Strings of(BuildContext context) {
    return Localizations.of<Strings>(context, Strings);
  }

  String get title {
    return Intl.message(
      'Hello World',
      name: 'title',
      desc: 'Title for the Demo application',
    );
  }
}

Future<Null> main() async {
  final Locale myLocale = Locale(Platform.localeName);
  await Strings.load(myLocale);
  runApp(MyApplication());
}
```

Now you can reference a string as follows: `final title = Strings.current.title`

username_6: @knex Thanks! Very clever! Not sure how you implemented initializeMessages, but this method seems to bypass the use of a LocalizationsDelegate, which then makes it difficult to manage which locales are allowed and when to reload, etc. But a very useful method to have in the tool belt.

username_7: Here's another option: do it via the flutter i18n plugin for IntelliJ https://github.com/long1eu/flutter_i18n/pull/50

username_8: @username_3 could you please give us a hint how you implemented initializeMessages? I'm also struggling with the context problem. Thank you in advance!

username_7: If you build the latest flutter-i18n plugin from source, you'd be able to use S.current.YourMessage everywhere in the project after importing `import 'generated/i18n.dart';`. Is that what you are asking? Alex

username_8: @username_7 yeah, something like that. Will definitely check the plugin today, thanks! A context-less solution should be provided and documented by the Flutter team...

username_7: Well, honestly, in the end we had to share the context statically/globally anyway in order to be able to implement global error handling/messages. So then you could also use it for the internationalization. One way or another you end up sharing the context (or the messages) via a static property of a class… If you can find a more elegant solution, I'd be glad to hear about it. Alex

username_9: If you follow the `Intl` example, it is already depending on a static property (i.e. `Intl.defaultLocale`) from the start. So why do you need the ceremony of `DemoLocalizations.of(context).[..]` when the locale of Intl is global anyway? If I understand this correctly, it makes no difference to call the methods that wrap `Intl.message` directly (such as `DemoLocalizations.title()`). And those methods can be static themselves, and you can even put them wherever you want (in the widget where they are used). You only still need a LocalizationsDelegate (subtype) to listen to the load method to load the correct locale. Am I seeing this correctly?

username_10: A new approach to localization in Flutter. I wrote two plugins to simplify the localization process:
1. Prepare all strings in Airtable
2. Run lang_table to generate json files
3. Run gen_lang to generate i18n.dart and message_all.dart
4. Use i18n.dart for localization in your code

https://medium.com/@kingwu/a-new-approach-of-localization-in-flutter-e18bfb2b14ab

username_11: Can you guide me how to use this plugin in Android Studio version 3.4.0? I got this error in my studio: ![image](https://user-images.githubusercontent.com/13308845/60179357-27383c00-983b-11e9-958b-b6bbd42cd7a6.png)

username_7: Yes, it looks like it needs to be ported to the newer IntelliJ, but the original authors seem to be quite passive... any takers? Sent from my iPad
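The context-free pattern username_3 shows in Dart (load once, then read via a static `current`) is language-agnostic. A minimal sketch of the same idea in JavaScript (hypothetical names, purely illustrative):

```javascript
// Context-free localization sketch: load a locale's strings once at
// startup, then read them anywhere through a static `current` reference.
const messages = {
  en: { title: 'Hello World', help: 'Help' },
  ja: { title: 'タイトル', help: 'ヘルプ' },
};

class Strings {
  static current = null;

  constructor(localeName) {
    this.localeName = localeName;
  }

  static load(localeName) {
    const result = new Strings(localeName);
    Strings.current = result; // the global the rest of the app reads
    return result;
  }

  get title() {
    return messages[this.localeName].title;
  }
}

// At startup:
Strings.load('ja');

// Anywhere else, without any context object:
console.log(Strings.current.title); // "タイトル"
```

The trade-off is exactly the one discussed in the thread: a global loses the per-subtree locale granularity that the context-based lookup provides.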
ericniebler/range-v3
419644167
Title: Adding support for sizes/strides in slice

Question: username_0: Is there an existing way to compose this behavior with existing range views? If not, would this proposed functionality warrant a new range view type called `gslice`? Thanks!

Answers: username_1: I would be satisfied just with a stride in slice.
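For readers unfamiliar with the request: "slice with a stride" means taking a sub-range and then every n-th element of it. A plain JavaScript sketch of that composition (illustrative only, not range-v3 code):

```javascript
// Slice-then-stride semantics: elements [begin, end) of a sequence,
// then every `step`-th element of that slice.
function slice(arr, begin, end) {
  return arr.slice(begin, end);
}

function stride(arr, step) {
  return arr.filter((_, i) => i % step === 0);
}

const xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
console.log(stride(slice(xs, 2, 9), 3)); // [2, 5, 8]
```

In range-terms this is a pipeline of two views; the issue asks whether a single `gslice` view should bundle them (plus sizes) in one step.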
z586/test
132634694
Title: 1 Question: username_0: [Task](https://dev.trackduck.com/project/56ba3b2f5723722123e7264a/issue/56ba3eea5723722123e7267a) with ***Low*** priority was created by ***<NAME>***: ``` 1 ``` [![Issue screenshot](https://devtrackduck.s3.amazonaws.com/crop/56ba3b2f5723722123e7264a-1455046378207-258-278.jpg)](https://dev.trackduck.com/preview/awsi/56ba3b2f5723722123e7264a-1455046378207-258-278.jpg) Environment: ***Chrome 48*** on ***Mac OS*** with ***1440x761*** screen size. Check it in your [website](http://www.tut.by/?tdtask=56ba3eea5723722123e7267a) or get [more details...](https://dev.trackduck.com/project/56ba3b2f5723722123e7264a/issue/56ba3eea5723722123e7267a)
crowdbotics-users/audvice-native-audio-recorder
409760945
Title: Sound is not played over the main speakers on iOS Question: username_0: When starting the player on iOS, the sound is currently played over the phone speakers at the top and not over the speakers at the bottom. Thus, the volume is very low. The sound should be played over the main speakers at the bottom. Answers: username_0: Solved by #15 Status: Issue closed
react-hook-form/react-hook-form
643902427
Title: `watch` / `getValues` don't return the default value when it's defined in `Controller`

Question: username_0:
**Describe the bug**
When default values are defined using `useForm`, `watch` and `getValues` correctly return these values before any change to the inputs. However, when default values are defined locally for each `Controller`, the default value of each input isn't returned, and the value is `undefined`.

**To Reproduce**
Steps to reproduce the behavior:
1. Create a `Controller` with a `defaultValue` prop
2. Watch the value of this input
3. See that there is a value in the input but the string returned by `watch` is undefined

**Codesandbox link (Required)**
https://codesandbox.io/s/react-hook-form-watch-z0bf0

**Expected behavior**
`watch` (and `useWatch`) and `getValues` return the value specified in the `defaultValue` prop of `Controller`.

**Desktop (please complete the following information):**
- OS: macOS (15.15.5)
- Browser: Chrome (83)

Answers: username_1: <img width="923" alt="Screen Shot 2020-06-24 at 8 50 51 am" src="https://user-images.githubusercontent.com/10513364/85474307-d0a14180-b5f7-11ea-9195-25bdacf764d0.png">
Please make sure it's one or the other.
Status: Issue closed

username_0: Ok, thanks @username_1 :) It's a shame, because it would be great to be able to fully declare a field (with its default value) anytime and anywhere :)
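The distinction can be modeled without React at all: values the form knows about up front are readable immediately, while a per-field default only reaches the central form state once that field registers (mounts). A minimal conceptual sketch (hypothetical mini-form, not react-hook-form internals):

```javascript
// Minimal model: watch() reads only from central form state.
function createForm(defaultValues = {}) {
  const values = { ...defaultValues };
  return {
    watch: (name) => values[name],
    register: (name, fieldDefault) => {
      // A per-field default reaches form state only at registration time.
      if (!(name in values)) values[name] = fieldDefault;
    },
  };
}

// Default known to the form up front: visible immediately.
const a = createForm({ firstName: 'Ada' });
console.log(a.watch('firstName')); // "Ada"

// Default declared only on the field: undefined until the field registers.
const b = createForm();
console.log(b.watch('firstName')); // undefined
b.register('firstName', 'Ada');
console.log(b.watch('firstName')); // "Ada"
```

This is only a mental model of why the maintainer's answer is "pick one place for defaults"; the real library's timing details differ.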
OnizukaLab/ConferenceProceedings
370741552
Title: Multimodal Grounding for Language Processing

Question: username_0:
## In one sentence
A survey paper on multimodal research in NLP, covering a wide range of topics from a taxonomy of multimodality to the latest trends in multimodal research.

### Paper link
[Multimodal Grounding for Language Processing](http://aclweb.org/anthology/C18-1197)

### Authors/Affiliations
- <NAME> (Language Technology Lab, University of Duisburg-Essen)
- <NAME> (Ubiquitous Knowledge Processing Lab (UKP) and Research Training Group AIPHES, Department of Computer Science, Technische Universitat Darmstadt)
- <NAME> (Ubiquitous Knowledge Processing Lab (UKP) and Research Training Group AIPHES, Department of Computer Science, Technische Universitat Darmstadt)

### Venue
COLING 2018

## Summary
Discusses multimodal research in NLP from the perspective of information flow.

## Novelty/Contributions
- Classifies multimodal processing into:
  - Cross-modal transfer
  - Cross-modal interpretation
  - Joint multimodal processing
- Discusses trends in multimodal research
  - A variety of multimodal work exists on tasks such as semantic representation learning and action description
- Also discusses open challenges for future multimodal research

## Comments
Survey papers can't be skimmed, so this was a tough read.

Status: Issue closed
reduxjs/redux-toolkit
817134667
Title: Action creator created with `createAction` leads to error when used in `extraReducers`

Question: username_0: I am currently trying to dispatch an action which is consumed by multiple slices and has been created with `createAction`. My setup looks like this:

### Top-level slice

```ts
// dataSlice.ts
export type DataState = {
  solutions: SolutionState
  annotations: AnnotationsState
  videoCodes: VideoCodesState
  videoCodePrototypes: VideoCodePrototypesState
  cuts: CutsState
}

export default combineReducers({
  solutions: SolutionSlice.reducer,
  annotations: annotationsSlice.reducer,
  videoCodes: videoCodesSlice.reducer,
  videoCodePrototypes: videoCodePrototypesSlice.reducer,
  cuts: cuttingSlice.reducer,
})

// The critical action
export const initData = createAction<DataState>('data/init')

export const actions = {
  solutions: SolutionSlice.actions,
  annotations: annotationsSlice.actions,
  videoCodes: videoCodesSlice.actions,
  videoCodePrototypes: videoCodePrototypesSlice.actions,
  cuts: cuttingSlice.actions,
}
```

I want to initialize the state of all sub-slices with a single action, therefore I created the `initData()` action creator and use it inside the sub-slices.

### Example sub-slice

```ts
// annotationSlice.ts
export const annotationsSlice = createSlice({
  name: 'annotations',
  initialState,
  reducers: {
    ...
  },
  extraReducers: (builder) => {
    builder.addCase(initData, (_, action) => {
      return action.payload.annotations
    })
  },
})
```

Everything compiles fine, but as soon as I open my app in the browser, I get the following console error:

```
Uncaught TypeError: can't access property "type", typeOrActionCreator is undefined
    addCase Redux
    extraReducers AnnotationsSlice.ts:74
    Redux 2
    executeReducerBuilderCallback
    createSlice ts AnnotationsSlice.ts:24
    Webpack 16
```

Any idea what could cause this?

Status: Issue closed

Answers: username_0: Ok, I think I've found the issue. Apparently we had some kind of circular dependency going on. Moving the action creator to a new file fixed the issue.
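Why the circular dependency produces exactly this TypeError: with a module cycle, the imported `initData` binding can still be `undefined` while the importing module's body (including `createSlice`) runs, and `addCase` then reads `.type` off `undefined`. A self-contained simulation (a hypothetical mini `addCase`, not the RTK source):

```javascript
// Mini version of builder.addCase's type resolution: it accepts either a
// string type or an action creator, and reads `.type` from the latter.
function addCase(typeOrActionCreator, reducer) {
  const type =
    typeof typeOrActionCreator === 'string'
      ? typeOrActionCreator
      : typeOrActionCreator.type; // throws if the binding is still undefined
  return { type, reducer };
}

// Normal case: the action creator module has finished evaluating.
const initData = Object.assign(() => ({ type: 'data/init' }), { type: 'data/init' });
console.log(addCase(initData, () => {}).type); // "data/init"

// Circular-import case: module A imports from B while B is still evaluating,
// so the binding A sees at addCase-time is undefined.
let notYetInitialized; // stands in for the unfinished circular import
try {
  addCase(notYetInitialized, () => {});
} catch (e) {
  console.log(e instanceof TypeError); // true — same failure mode as the report
}
```

Moving the action creator into its own dependency-free file breaks the cycle, so the binding is initialized before any slice module uses it.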
rails/rails
297867298
Title: ActiveRecord 5.1.5 query cache not working

Question: username_0:
### Steps to reproduce
Given the following MiniTest which implements a Sinatra app using ActiveRecord:

```ruby
require 'minitest'
require 'minitest/autorun'
require 'active_record'
require 'sinatra/base'
require 'rack/test'

class CachedQueryTest < Minitest::Test
  include Rack::Test::Methods

  class ApplicationRecord < ActiveRecord::Base
    self.abstract_class = true
  end

  class Article < ApplicationRecord
  end

  class ActiveRecordTestApp < Sinatra::Application
    post '/cached_request' do
      Article.cache do
        # Do two queries (second should cache.)
        Article.count
        Article.count
      end
    end
  end

  def app
    ActiveRecordTestApp
  end

  def setup
    ActiveRecord::Base.establish_connection(
      adapter: 'sqlite3',
      database: ':memory:')
    migrate_db
  end

  def migrate_db
    Article.exists?
  rescue ActiveRecord::StatementInvalid
    ActiveRecord::Schema.define(version: 20180101000000) do
      create_table 'articles', force: :cascade do |t|
        t.string 'title'
        t.datetime 'created_at', null: false
        t.datetime 'updated_at', null: false
      end
    end
  end

  def test_cached_tag
    # Make sure Article table exists
    migrate_db
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    # Do query with cached query
[Truncated]
```

```
# Running:
```

Instead it looks like:

```
-- create_table("articles", {:force=>:cascade})
   -> 0.0024s
D, [2018-02-16T12:41:06.446559 #19878] DEBUG -- :  (0.2ms)  SELECT COUNT(*) FROM "articles"
D, [2018-02-16T12:41:06.447383 #19878] DEBUG -- :  (0.2ms)  SELECT COUNT(*) FROM "articles"
.
```

### System configuration
**Rails version**: ActiveRecord `5.1.5` only.
**Ruby version**: 2.3.4

The expected behavior is produced correctly on version `5.1.4`. Is this a regression in version `5.1.5`? Related to https://github.com/rails/rails/pull/29609 perhaps?

Answers: username_1: Your point is correct. Because of the PR pointed out, if you do not have `ActiveRecord::Base.configurations`, the cache is no longer used. @username_2 Is this intentional?

username_2: Kind of. But I think if it is already connected we should also use the query cache. Mind opening a PR to check if it is connected or has a configuration?
Status: Issue closed
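The behavior the test exercises (`Article.cache do ... end`) can be sketched generically: inside the cache block, identical queries are served from a memo table instead of re-hitting the database. A hedged JavaScript sketch (not ActiveRecord's implementation, just the contract the test expects):

```javascript
// Block-scoped query cache sketch: inside withCache(), repeated identical
// SQL strings are served from a memo instead of re-running the query.
function makeDb(runQuery) {
  let cache = null; // null = caching disabled
  return {
    query(sql) {
      if (cache === null) return runQuery(sql);
      if (!cache.has(sql)) cache.set(sql, runQuery(sql));
      return cache.get(sql);
    },
    withCache(fn) {
      cache = new Map();
      try { return fn(); } finally { cache = null; } // cache dies with the block
    },
  };
}

let executions = 0;
const db = makeDb((sql) => { executions += 1; return 42; });

db.withCache(() => {
  db.query('SELECT COUNT(*) FROM "articles"');
  db.query('SELECT COUNT(*) FROM "articles"'); // served from cache
});

console.log(executions); // 1 — the second query never reached the database
```

The regression reported above is equivalent to `withCache` silently never enabling the memo, so both queries execute — which is exactly what the duplicated `SELECT COUNT(*)` log lines show.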
lufe089/CORINFront-end
380335477
Title: NEW: add an identification field to the Client model

Question: username_0: Also update the table in draw.io.
- Model in Django
- Serializer in Django
- View in Django (if applicable)
- Front-end to request the field.
Status: Issue closed

Answers: username_1: The field was created in the model, and the service and the seed were updated (for the backend). The modals were updated to handle the identification field in creation and editing. The view was reorganized to display the field, and client export was updated as well.
kubernetes/kubernetes
467924969
Title: Looking up the IP from the node name by DNS in the kubelet node status setter can take a long time, after which controller-manager marks the node as `NotReady`

Question: username_0:
**What happened**:
kubelet periodically reports node status, including the nodeIP, to the apiserver. It tries to get the nodeIP in the order below (you can find this logic at [here](https://github.com/kubernetes/kubernetes/blob/7285829b8306a7cc1a2dc0da943afcfe45c06579/pkg/kubelet/nodestatus/setters.go#L153-L192)):
1) Use nodeIP if set
2) If the user has specified an IP to HostnameOverride, use it
3) Look up the IP from the node name by DNS and use the first valid IPv4 address. If the node does not have a valid IPv4 address, use the first valid IPv6 address.
4) Try to get the IP from the network interface used as the default gateway

At step 3, there are some corner cases that make getting the nodeIP hang for a long time, even exceeding the `node-monitor-grace-period` flag setting of controller-manager, after which controller-manager marks this node as `NotReady`. These corner cases include, but are not limited to:
1) the DNS server has a network failure
2) the node's `/etc/resolv.conf` has a long timeout setting

Here are some logs from kubelet. As you can see, setting node status at position 0 (which is actually setting the nodeIP) takes more than 20s:

```shell
I0714 21:45:42.450816   27532 kubelet_node_status.go:503] Setting node status at position 0
... delete some unrelated messg.....
I0714 21:46:02.452322   27532 interface.go:384] Looking for default routes with IPv4 addresses
I0714 21:46:02.452326   27532 interface.go:389] Default route transits interface "ens33"
I0714 21:46:02.452629   27532 interface.go:196] Interface ens33 is up
I0714 21:46:02.452679   27532 interface.go:244] Interface "ens33" has 2 addresses :[172.16.152.137/24 fe80::e4b:cf31:c0f6:4d7f/64].
I0714 21:46:02.452687   27532 interface.go:211] Checking addr 172.16.152.137/24.
I0714 21:46:02.452690   27532 interface.go:218] IP found 172.16.152.137
I0714 21:46:02.452693   27532 interface.go:250] Found valid IPv4 address 172.16.152.137 for interface "ens33".
I0714 21:46:02.452696   27532 interface.go:395] Found active IP 172.16.152.137
I0714 21:46:02.452714   27532 kubelet_node_status.go:503] Setting node status at position 1
I0714 21:46:02.452735   27532 kubelet_node_status.go:503] Setting node status at position 2
I0714 21:46:02.453883   27532 kubelet_node_status.go:503] Setting node status at position 3
I0714 21:46:02.453886   27532 kubelet_node_status.go:503] Setting node status at position 4
I0714 21:46:02.453890   27532 kubelet_node_status.go:503] Setting node status at position 5
I0714 21:46:02.453891   27532 kubelet_node_status.go:503] Setting node status at position 6
I0714 21:46:02.453923   27532 kubelet_node_status.go:503] Setting node status at position 7
```

**What you expected to happen**:
Add a context to the `net.LookupIP` invocation and use this context to time out the DNS lookup, so that we can fall back to step 4. kubelet will then report its status in time and the node will stay healthy.

**How to reproduce it (as minimally and precisely as possible)**:
1. Set up a cluster
2. Increase kubelet's logging level to 6 to debug, then add the flag `--node-status-update-frequency=5s`
3. Change controller-manager's `node-monitor-grace-period` flag to a smaller value intentionally to make it easier to reproduce, e.g. `--node-monitor-grace-period=10s`
4. Change the `nameserver` field in one node's /etc/resolv.conf to a wrong IP to simulate a DNS server failure
5. Dig into kubelet's log; you'll find that the log line `Setting node status at position 0` takes a long time

**Anything else we need to know?**:
The DNS lookup timeout value differs depending on host settings and language defaults. Go's default is `timeout * attempt count * length of DNS server list`, where the timeout is 5s, the attempt count is 2 by default, and the length of the DNS server list depends on your `/etc/resolv.conf`.

**Environment**:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:

Not actually a bug, but it does cause failures, so marking as a bug for now.
/kind bug
/sig kubelet
/assign

Answers: username_1: Taking a look at your PR now! This is a _very_ detailed issue, thanks for doing such a great job filing :)
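The proposed fix — bound the DNS lookup with a timeout and fall back to step 4 — can be sketched generically. The real fix would wrap `net.LookupIP` with Go's `context.WithTimeout`; below is a hedged JavaScript sketch of the same race, with simulated lookups (all names and durations hypothetical):

```javascript
// Bound a slow lookup with a timeout and fall back to the next strategy
// (step 4: the default-gateway interface address, as in the log excerpt).
function withTimeout(promise, ms, fallback) {
  const timer = new Promise((resolve) => setTimeout(() => resolve(fallback()), ms));
  return Promise.race([promise, timer]);
}

// Simulated step 3: a DNS lookup that hangs far longer than the grace period.
const slowDnsLookup = () =>
  new Promise((resolve) => setTimeout(() => resolve('10.0.0.5'), 300));

// Simulated step 4 fallback (the interface IP found in the log excerpt).
const defaultGatewayIp = () => '172.16.152.137';

withTimeout(slowDnsLookup(), 50, defaultGatewayIp).then((ip) => {
  console.log(ip); // "172.16.152.137" — the fallback wins, status is reported on time
});
```

The key property is that the status-report path now has a hard upper bound regardless of resolver behavior, which is what keeps the node inside `node-monitor-grace-period`.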
microsoftgraph/microsoft-graph-toolkit
954251745
Title: New component: mgt-picker

Question: username_0:
# Proposal: Create a generic picker component

## Description
Build a generic picker that supports a predefined list of Graph entities (e.g. Files, People, Messages, Channels, Sites, etc.). The developer should be able to choose which entities to include in their app. We should also explore making the picker easily extensible for new entities.

## Rationale
We have been getting multiple feature requests to build a picker for "x". This will allow us to resolve many of these requests with one solution.

## Preferred Solution
TBD. Needs a spec.

Answers: username_1: Will the UX experience be the same for all entities (i.e. search-as-you-type + dropdown with matching items + input box with selected items inline, kind of what we have for the people picker today)? If so, the picker could be as generic as letting people specify the resource to query to get a list of items, and then specify a template to render one item. That way, we wouldn't restrict folks to a specific set of entities and they could pick anything they want in a consistent way.
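username_1's "resource + template" suggestion could look something like the configuration below — purely a hypothetical sketch to make the idea concrete, not a shipped mgt-picker API:

```javascript
// Hypothetical generic-picker configuration: the developer names the Graph
// resource to query and supplies a render function for one result item.
const pickerConfig = {
  resource: '/me/joinedTeams',   // any Graph resource returning a collection
  scopes: ['Team.ReadBasic.All'],
  keyName: 'displayName',        // property matched while the user types
  renderItem: (item) => item.displayName,
};

console.log(pickerConfig.renderItem({ displayName: 'Contoso' })); // "Contoso"
```

With a shape like this, adding a "picker for x" becomes a configuration change rather than a new component.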
algorithm006-class01/algorithm006-class01
570293217
Title: 【017 Week 02】学习总结 Question: username_0: 这周的主要学习内容是哈希表,树,还有递归 哈希表在平时的工作中使用比较多,数据查找几乎是最快的(O(1)),不过hashcode会重复,从而发生数据碰撞,又JAVA源码得知,如果碰撞发生大于8时候,碰撞的数据以红黑树的形式存放。平时运用中,主要是需要频繁查询的列表,可以使用哈希表这种数据结构。 由于是非科班的,树结构在之前几乎没多接触过(mysql的B+树),在数据量很大的情况下,O(n)都是很大的量,而平衡树的查询只需要O(logn),类似于二分查找法的拓展,之后接触了前序,中序,后序的遍历,这个老师讲的很好,之前做开学考试时候有百度过,讲得比较晦涩,老师讲的比较易懂,某些课后习题很简单就搞定了。树结构适合上下层节点有关联的数据,比如数据库索引,通过索引,一层一层,可以在叶子节点获取数据。老师如果可以讲一下平衡树的生成会更好,比如翻转什么的。 从树的遍历中,我们引申到了递归,递归真的是我最怕的题目,感觉代码要各种人肉遍历,十分麻烦,通过习题,基本上能很好使用递归了,最后几道题还学会了回溯法,用来解排列组合问题比较不错。
vasturiano/react-force-graph
1170968047
Title: ForceGraph2D - Safari issues on non-M1 Mac chips

Question: username_0:
**Describe the bug**
We're using only the 2D mode to render medium-size graphs (approx. 3000 nodes / 6000 edges) and it's not working properly in Safari v15.3 on older Macs.

**To Reproduce**
Cannot share this one; it holds sensitive data.

**Expected behavior**
Slow render; browser closing/crashing.

**Desktop (please complete the following information):**
- OS: Monterey
- Browser: Safari
- Version: 14, 15.1, 15.2, 15.3

**Additional context**
We're only importing the ForceGraph2D library. Could something be missing if we just import that? Should we import the whole library?

Answers: username_1: @username_0 thanks for reaching out. I've tested [this example](https://username_1.github.io/react-force-graph/example/text-nodes/index-2d.html) on an Intel-chip MacBook, on Safari 15.4. I do not notice any specific issue. Can you reproduce your issue using that example? If not, it would be useful if you make a reduced example on https://codepen.io where the issue is visible. Also, importing just `react-force-graph-2d` should be totally sufficient if you don't need any of the other modes.

username_0: @username_1 Thanks for the fast reply! I'm building one in codesandbox and will share it in a couple of minutes.

username_2: Any update on this? @username_1
broadinstitute/gatk
306020600
Title: improve error message during build when ToolProvider is unavailable

Question: username_0: build.gradle finds the tool provider with the following line:

```
final javadocJDKFiles = files(((URLClassLoader) ToolProvider.getSystemToolClassLoader()).getURLs())
```

`ToolProvider.getSystemToolClassLoader()` returns null on a JRE and certain other Java installations. This causes a confusing NullPointerException. We should have a better error message when this happens.

Answers: username_1: I was just adding the prerequisite checker for the large resources, so I'll fix this while I'm at it.
Status: Issue closed

username_1: Fixed via #4530.
sp614x/optifine
728303044
Title: [Bug Report] grass_block Allows Transparency with Shaders

Question: username_0:
## Description of Issue
Solid blocks usually don't allow transparency with shaders. This doesn't seem to apply to grass_block, though.

How it looks:
![image](https://user-images.githubusercontent.com/60381935/97020818-50adc880-1552-11eb-94a3-6cfe765e787a.png)

How it should look:
![image](https://user-images.githubusercontent.com/60381935/97020839-59060380-1552-11eb-9c9b-b5485d77f3d9.png)

## Steps to Reproduce
1. Make a `gbuffers_terrain.fsh` shader with something like this: `color *= vec4(vec3(1), 0);`
2. All solid blocks are still solid, except for the grass_block texture

## OptiFine Version
`OptiFine HD U G4 pre2`

## Installation Method
Standalone Installer

## Log Files/Crash Reports
[latest.log](https://github.com/sp614x/optifine/files/5430304/latest.log)

## F3 Debug Screenshot
![image](https://user-images.githubusercontent.com/60381935/97021726-7edfd800-1553-11eb-9721-f3f32371ec39.png)

## Additional Information
Tested with [username_0 Shaders](https://www.planetminecraft.com/mod/luracasmus-s-shaders/)

Answers: username_1: It always has transparency (it needs it for the overlay).
EBIvariation/eva-pipeline
378770052
Title: Abort when duplicate sample names are found Question: username_0: The pipeline must abort when an input like the latter is provided, by modifying the class [VcfHeaderReader](https://github.com/EBIvariation/eva-pipeline/blob/develop/src/main/java/uk/ac/ebi/eva/pipeline/io/readers/VcfHeaderReader.java#L103-L110).<issue_closed> Status: Issue closed
pivotal-sprout/sprout-exemplar
35119665
Title: I would like to see a HOWTO.md

Question: username_0: How about a HOWTO that tells me how to use this project? Including how to test-drive recipes and providers.

Answers: username_1: Rough idea; maybe @hiremaga or @wendorf have other thoughts:

---

## How to use

## First Steps
* Copy the contents of `sprout-exemplar` to a new directory for your team, `sprout-<our-team-name>`
* Initialize this git repo; maybe add all files so that changes can be tracked?
* Replace all occurrences of `-exemplar` with `-<our-team-name>`
* Change any filenames containing `exemplar` to `<our-team-name>`
* Test your changes by running `bundle && bundle exec rake`
* Fix any rubocop, foodcritic, or spec errors

At this point you should have a working sprout-wrap setup with no interesting recipes.

## Second Steps
* Remove the `sprout-<our-team-name>::path` recipe (located at `sprout-<our-team-name>/recipes/path.rb`)
* Customize `soloistrc`
* Add cookbooks
  * [homebrew](https://github.com/chef-cookbooks/homebrew)
    * configure formulas, taps, and casks under the `node_attributes:` key in soloistrc
  * [chruby](https://github.com/pivotal-sprout/sprout-chruby)
* More resources at [pivotal-sprout](https://github.com/pivotal-sprout)
pnpm/pnpm
1088952952
Title: fix:  ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL for scoped peerDependency Question: username_0: <!-- If this issue affects many people in a company/big team, create a post for your company in the following discussion: https://github.com/pnpm/pnpm/discussions/3787 and link the issue in your post. This will help us prioritize issues that affect more people. --> ### pnpm version: 6.24.3 ### Code to reproduce the issue: <!-- If there was a fatal error also include a gist of your node_modules/.pnpm-debug.log file. --> 1. Clone https://github.com/username_0/node-configs 2. `pnpm install` 3. cd packages/eslint-config 4. pnpm publish 5.  ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL  Cannot resolve workspace protocol of dependency "@username_0/prettier-config" because this dependency is not installed. Try running "pnpm install" ```sh $ git clone [email protected]:username_0/node-configs.git $ pnpm install $ cd packages/eslint-config $ pnpm publish  ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL  Cannot resolve workspace protocol of dependency "@username_0/prettier-config" because this dependency is not installed. Try running "pnpm install" ``` ```jsonc { "0 debug pnpm:scope": { "selected": 6, "total": 6, "workspacePrefix": "/Users/ocean/main/node-configs" }, "1 error pnpm": { "code": "ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL", "err": { "name": "pnpm", "message": "Cannot resolve workspace protocol of dependency \"@username_0/prettier-config\" because this dependency is not installed. Try running \"pnpm install\".", "code": "ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL", "stack": "pnpm: Cannot resolve workspace protocol of dependency \"@username_0/prettier-config\" because this dependency is not installed. 
Try running \"pnpm install\".\n at makePublishDependency (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178311:17)\n at async /opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178299:9\n at async Promise.all (index 0)\n at async makePublishDependencies (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178297:60)\n at async makePublishManifest (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178277:22)\n at async packPkg (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178900:35)\n at async Object.handler (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178873:7)\n at async handler (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178740:27)\n at async default_1 (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178450:35)\n at async Object.handler [as publish] (/opt/homebrew/Cellar/pnpm/6.24.2/libexec/lib/node_modules/pnpm/dist/pnpm.cjs:178705:9)" } } } ``` ### Expected behavior: It should just work. ### Actual behavior: It does not. ### Additional information: - `node -v` prints: v16.13.1 - Windows, macOS, or Linux?: macOS
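For context on the failing step, here is a rough illustrative sketch — not pnpm's actual implementation — of the substitution `pnpm publish` performs: `workspace:` specifiers in the manifest are rewritten to concrete versions taken from the installed workspace, and the error above fires when a listed dependency can't be found there (the reported bug being that the lookup fails for the scoped peer dependency even though it *is* installed). The function name and exact range handling are assumptions for illustration.

```python
def resolve_workspace_deps(deps, workspace_versions):
    """Rewrite `workspace:` specifiers to concrete versions (illustrative sketch).

    `deps` maps package names to specifiers; `workspace_versions` maps the
    names of installed workspace packages to their versions.
    """
    resolved = {}
    for name, spec in deps.items():
        if not spec.startswith("workspace:"):
            resolved[name] = spec
            continue
        if name not in workspace_versions:
            # Mirrors ERR_PNPM_CANNOT_RESOLVE_WORKSPACE_PROTOCOL
            raise LookupError(
                f'Cannot resolve workspace protocol of dependency "{name}" '
                'because this dependency is not installed.'
            )
        version = workspace_versions[name]
        rng = spec[len("workspace:"):]
        if rng in ("*", ""):
            resolved[name] = version          # workspace:* -> exact version
        elif rng in ("^", "~"):
            resolved[name] = rng + version    # workspace:^ -> ^<version>
        else:
            resolved[name] = rng              # an explicit range is kept as-is
    return resolved
```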
mesonbuild/meson
1063789261
Title: Issue with Intel MPI detection Question: username_0: Hello, It feels like there is a bug in Meson when trying to detect Intel MPI. The implemented strategy is to run the MPI compiler wrapper with the `-show` option which is fine but the way the wrapper is chosen seems wrong: https://github.com/mesonbuild/meson/blob/471c05d57dec7a6f59dcf9c80c9ed963d052320d/mesonbuild/dependencies/mpi.py#L67-L72 Meson tries to get the wrapper path or name from the `I_MPI_CC`, `I_MPI_CXX` and `I_MPI_FORT` environment values which makes no sense since according to Intel MPI documentation those should be used to "set the path/name of the underlying compiler to be used [by the wrapper]" (https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-windows/top/environment-variable-reference/compilation-environment-variables.html). That means that if `I_MPI_CC`, `I_MPI_CXX` and `I_MPI_FORT` are correctly set, Meson will try the `-show` option on the actual compiler instead of the wrapper and thus will fail to detect Intel MPI. I would suggest using the same environment variable as the non-Intel MPI code path. I can provide a PR. Best regards, Rémi
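To make the suggested fix concrete, here is a hedged sketch — not meson's actual code — of how the wrapper could be chosen: honor MPICC/MPICXX/MPIFC as wrapper overrides, mirroring the non-Intel code path, and deliberately ignore I_MPI_CC/I_MPI_CXX/I_MPI_FORT, which per Intel's docs name the underlying compiler used *by* the wrapper. The default wrapper names below assume the standard Intel MPI wrappers.

```python
import os

# Illustrative sketch of the proposed fix, not meson's implementation.
# Default names assume the standard Intel MPI wrappers for the Intel
# compilers; the override variable names mirror the non-Intel code path.
_DEFAULT_WRAPPERS = {'c': 'mpiicc', 'cpp': 'mpiicpc', 'fortran': 'mpiifort'}
_OVERRIDE_VARS = {'c': 'MPICC', 'cpp': 'MPICXX', 'fortran': 'MPIFC'}

def intel_mpi_wrapper(language, environ=None):
    """Return the compiler wrapper to probe with `-show`.

    I_MPI_CC/I_MPI_CXX/I_MPI_FORT are deliberately *not* consulted: per the
    Intel MPI documentation they set the path/name of the underlying compiler
    used by the wrapper, so running `-show` against them fails.
    """
    environ = os.environ if environ is None else environ
    return environ.get(_OVERRIDE_VARS[language], _DEFAULT_WRAPPERS[language])
```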
runelite/runelite
1012362429
Title: Option to sort Fairy Rings in travel log alphabetically by destination name instead of by ring code
Question:
username_0: It's sometimes difficult to find the destination I'm looking for: I know the name of where I want to go, but since the log is sorted by ring code I end up having to comb through it looking for the entry instead.

My suggestion would be to add a checkbox to the Fairy Rings plugin to sort by destination name instead of ring code.
swcarpentry-ja/i18n
961546362
Title: Translation: Library Carpentry OpenRefine Episode 11 Question: username_0: Translation: Library Carpentry OpenRefine Episode 11 ----- Original lesson: https://librarycarpentry.org/lc-open-refine/11-using-arrays-transformations/index.html * Please [see the README for more information about editing PO files](https://github.com/swcarpentry-ja/i18n/blob/ja/README_en.md#about-po-files) * Please assign yourself (or add a comment) to the issue **before** you start translating, so other people know that you are translating. * Once you have finished translating, send a pull request and reference this issue. ----- * PO fileの編集については[READMEをご参照ください](https://github.com/swcarpentry-ja/i18n#about-po-files)。 * あなたが翻訳し始めたと分かるように、翻訳を**始める前**にこのイシューを自分に割り振って(assign)、あるいはコメントを書いて下さい。 * 翻訳が終わったら、プルリクエストを開いて、このイシューを参照・リンクして下さい。 https://github.com/swcarpentry-ja/i18n/blob/bb59a567c061b1ac798065838a9cd85b518b90d2/po/lc-open-refine.ja.po#L1659-L1762
mlpack/mlpack
381972042
Title: Unable to build mlpack on mac: 'armadillo' file not found
Question:
username_0: Hello, I have installed and built mlpack, boost and armadillo from source following all the steps and using cmake. However, when I try to run sample code in Xcode, I get build-time errors from "'armadillo' file not found" next to `#include <armadillo>` in `arma_extend.hpp`. I have also included the search paths in my Xcode project. I noticed that all the other includes work when they're including header files, and `#include <armadillo/Mat_bones.hpp>` works as well, for instance, but I cannot include armadillo as a folder. I'm new to using installed libraries in C++, thank you for your help
Answers:
username_1: Hi there, there's not much information here to help debug. One thing to point out here though is that `#include <armadillo>` does not include a directory---actually `armadillo` is a header file (but it does not have a `.h` or `.hpp` suffix).
username_2: Also, make sure to link against armadillo/mlpack, for example I use something like: `clang++ test.cpp -larmadillo -lmlpack`
username_0: Hi! So I ended up installing Ubuntu on a VM and built mlpack from there, and everything seems to work there now. I gave up installing on mac OS but thank you for your replies, cheers!
username_1: Hey there, glad you got it worked out! If you want to debug OS X sometime we are happy to help, but if not I'll close this issue for now. :+1:
Status: Issue closed
Nutty69/DecDecBingo
389089442
Title: Board doesn't work in Private / Incognito mode
Question:
username_0: When I first coded this I didn't know about localStorage, so cards would reset if swapped out of memory, the tab was closed, et cetera. Now I've added localStorage, but there are issues in Private / Incognito mode where the card can sometimes get reset. Doing anything server-side is not an option (limitations from hosting provider). So if anyone has any insight into how to make this work better in Private / Incognito mode, that would be appreciated. In an ideal world it would work the same in Private / Incognito mode as in "public" mode (close browser, come back, board is right where it was beforehand -- same numbers, same marked squares). Is this possible?
asciinema/asciinema-player
544421236
Title: Permissions issue Question: username_0: This is the first time I use format v2. I ran into a problem I didn't have with v1. I had to add the execution permission for asciinema to work, so I suggest mentioning in the documentation that you may need the execution permission when you embed in a website. Answers: username_0: This is not an issue, but a suggestion, so I'm closing this now. Status: Issue closed
aws/aws-cdk
758697480
Title: (aws-cdk/core): BundlingDockerImage.fromAsset not finding assets Question: username_0: <!-- description of the bug: --> When using `lambda.Code.fromAsset` and `cdk.BundlingDockerImage.fromAsset` together, synth fails to find anything in `\asset-output` ### Reproduction Steps 1. Create a Dockerfile that compiles and copies files to an `/asset-output` directory ``` FROM python:3.7-slim COPY . /asset-input COPY . /asset-output WORKDIR /asset-input RUN apt-get update && apt-get -y install curl make automake gcc g++ subversion python3-dev RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 - ENV PATH "/root/.poetry/bin:/opt/venv/bin:${PATH}" RUN poetry export -f requirements.txt -o requirements.txt RUN pip3 install -r requirements.txt -t /asset-output ``` 2. Use the following snippet when creating the lambda using cdk: ``` code: lambda.Code.fromAsset(PROJECT_DIR, { bundling: { image: cdk.BundlingDockerImage.fromAsset(PROJECT_DIR) } }), ``` 3. Run `tsc && cdk synth -o cdk.out` ### What did you expect to happen? Docker should find the compiled assets in `/asset-output` ### What actually happened? `Error: Bundling did not produce any output. 
Check that content is written to /asset-output.` ### Environment - **CDK CLI Version :** 1.75.0 - **Framework Version:** 1.75.0 - **Node.js Version:** v15.3.0 - **OS :** Mac Catalina 10.15.7 - **Language (Version):** TypeScript 4.1.2 ### Other If I use an implementation of ILocalBundling that is mostly copied from asset-staging.ts but calls both `run` and `cp` the synth works but I don't believe that should be necessary: ``` class LocalBundling implements ILocalBundling { tryBundle(outputDir: string, options: BundlingOptions): boolean { let user: string; if (options.user) { user = options.user; } else { // Default to current user const userInfo = os.userInfo(); [Truncated] command: options.command, user, volumes, environment: options.environment, workingDirectory: options.workingDirectory ?? AssetStaging.BUNDLING_INPUT_DIR, }); options.image.cp(AssetStaging.BUNDLING_OUTPUT_DIR ?? outputDir, outputDir); return true; } } ``` --- This is :bug: Bug Report Answers: username_1: https://github.com/aws/aws-cdk/blob/cbe7a10053ce0e4e766f360cf8792f0b46c565f0/packages/%40aws-cdk/core/lib/bundling.ts#L178-L181 `/asset-input` and `/asset-output` (with mounted volumes) works when `run`ning the container not when building it. username_0: So what is the appropriate use of the default bundling behavior? I would assume the snippet in step 2 above would work without needing to implement `ILocalBundling` to use the `cp` method. username_1: The default behavior is to run a command in a container where the source path is mounted at `/asset-input` and during the execution it should put content at `/asset-output`. 
You can see examples here: * https://github.com/aws/aws-cdk/blob/v1.77.0/packages/%40aws-cdk/aws-s3-assets/test/integ.assets.bundling.lit.ts * https://github.com/aws/aws-cdk/blob/v1.77.0/packages/%40aws-cdk/aws-lambda-python/lib/bundling.ts * https://github.com/aws/aws-cdk/blob/v1.77.0/packages/%40aws-cdk/aws-lambda-nodejs/lib/bundling.ts username_0: Gotcha, so anything put in `/asset-output` by the Dockerfile will be cleared out and it is assumed that the command will be the one to transfer the required bundling files? username_1: yes, you can do whatever you want as long as you put content in `/asset-output` at some point, you have access to the original asset in `/asset-input`. A trivial example would be `cp -R /asset-input/* /asset-output` username_2: I also ran into a situation where I just wanted to use some content from the built image as the asset output. I think our APIs can probably offer a better experience for this. 1. In this case the asset input is meaningless. 2. Ideally `docker cp` will be much faster to extract files from the built image as oppose to running a command inside the image. @username_1 what do you think? username_1: You can already do this: ```ts const assetPath = '/path/to/my/asset'; const image = cdk.BundlingDockerImage.fromAsset('/path/to/docker'); image.cp('/path/in/the/image', assetPath); new lambda.Function(this, 'Fn', { code: lambda.Code.fromAsset(assetPath), runtime: lambda.Runtime.NODEJS_12_X, handler: 'index.handler', }); ``` Is it this that you want to improve? `docker cp` is of course faster but has different use cases. username_0: My issue was that the Dockerfile put everything needed in `/asset-output` but when the container ran that folder was empty. I am now putting everything in a folder named `/asset-stage` and passing `cp -r ../asset-stage ../asset-output` to copy everything from stage to output. 
What I would like to see improved is either better documentation around the behavior of the `asset-output` folder or just simply taking what's already in there instead of wiping it before bundling. username_2: ```ts const image = BundlingDockerImage.fromAsset('/path/to/docker'); new lambda.Function(this, 'Fn', { code: lambda.Code.fromAsset(image.fetch('/path/in/the/image')), runtime: lambda.Runtime.NODEJS_12_X, handler: 'index.handler', }); ``` I think the API for `BundlingDockerImage` can be improved: ```ts const image = Docker.build('/path/to/docker'); const tmpdir = image.cp('/path/in/the/image'); // alternatively, users can specify the destination for "cp" image.cp('/path/in/the/image', tmpdir); ``` And then, we can also add something like: ```ts new lambda.Function(this, 'Fn', { code: lambda.Code.fromDockerBuildAsset('/path/in/the/image'), runtime: lambda.Runtime.NODEJS_12_X, handler: 'index.handler', }); ``` username_1: @username_2 `error JSII5016: Members cannot be named "build" as it conflicts with synthetic declarations in some languages.` username_2: Haha... so perhaps `Docker.fromBuild()`? username_1: yes username_1: shouldn't this be `lambda.Code.fromDockerBuildAsset('/path/to/docker', buildOptions)` and the asset is supposed to be located at `/asset` in the image?
tidyverse/dplyr
352528090
Title: auto print bug when tibble is loaded in a dplyr vignette Question: username_0: When this vignette inside dplyr loads the tibble 📦, we get : ``` Building dplyr vignettes Quitting from lines 11-13 (a.Rmd) Error: processing vignette 'a.Rmd' failed with diagnostics: subscript out of bounds ``` Not sure what this is. The `subscript out of bounds` message seems to come from R internals. Might be a markdown, dplyr, tibble, or something else issue. Minimal vignette to trigger the bug: ```` --- title: "autoprint bug" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{auto print bug} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} library(tibble) tibble(a = 1:10) ``` ```` Answers: username_1: Did this get resolved? username_0: apparently not, I've rebased #4024 and sent a job yesterday but it still fails :/ https://travis-ci.org/tidyverse/dplyr/jobs/537666716#L131 username_1: Resolved yet? Status: Issue closed
karma-runner/karma-chrome-launcher
190040172
Title: Very slow tests when the browser is minimized Question: username_0: Hi. I hope this is a good place to report an issue that I am having. Ok so as I said in the title, I have a problem with very slow test when chrome browser is minimized. Take a look at these times: // minimized browser ``` Chrome 54.0.2840 (Windows 10 0.0.0): Executed 19 of 19 SUCCESS (8.607 secs / 1.204 secs) TOTAL: 19 SUCCESS ``` // open browser ``` Chrome 54.0.2840 (Windows 10 0.0.0): Executed 19 of 19 SUCCESS (0.406 secs / 0.272 secs) TOTAL: 19 SUCCESS ``` I would like to know why is this happening and if I can fix this. Is this some bug or have i set something wrong in my karma config? (although i don't know what it would be). But there is my config: ``` import path from 'path'; import Config from 'webpack-config'; import yargs from 'yargs'; const { watch } = yargs.argv; const projectRoot = path.resolve(__dirname, '../../'); const webpackConfig = new Config().extend({ 'config/webpack.config.base': (config) => { // no need for app entry during tests delete config.entry; // make sure isparta loader is applied before eslint config.module.preLoaders = config.module.preLoaders || []; config.module.preLoaders.unshift({ test: /\.js$/, loader: 'isparta', include: path.resolve(projectRoot, 'src'), }); // only apply babel for test files when using isparta config.module.loaders.some((loader) => { if (loader.loader === 'babel') { loader.include = path.resolve(projectRoot, 'test/unit'); return true; } return false; }); return config; }, }).merge({ resolve: { alias: { vue: 'vue/dist/vue.js', src: path.join(projectRoot, 'src'), }, }, devtool: 'cheap-module-source-map', vue: { [Truncated] "mocha": "^3.1.2", "phantomjs-prebuilt": "^2.1.13", "postcss-loader": "^1.1.1", "require-dir": "^0.3.1", "sass-loader": "^4.0.2", "sinon": "^2.0.0-pre.4", "sinon-chai": "^2.8.0", "style-loader": "^0.13.1", "svg-sprite-loader": "0.0.31", "svgo-loader": "^1.1.2", "url-loader": "^0.5.7", "vue-loader": "^9.9.5", 
"vue-style-loader": "^1.0.0", "webpack": "2.1.0-beta.22", "webpack-config": "^6.2.1", "webpack-dev-middleware": "^1.8.4", "webpack-hot-middleware": "^2.13.2", "yargs": "^6.4.0" } ``` Answers: username_1: Originally thought it was due to the test runner I was using, but after further investigation, it's definitely looking like a `karma` / `Chrome` issue. I haven't tested other browsers to see if this appears with them. With browser window in foreground: ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 900 of 900 SUCCESS (15.354 secs / 15.257 secs) ``` With browser in background: ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 1 of 1 SUCCESS (1.713 secs / 0.278 secs) Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 2 of 2 SUCCESS (3.374 secs / 1.885 secs) Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 3 of 3 SUCCESS (3.143 secs / 3.137 secs) Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 4 of 4 SUCCESS (3.653 secs / 3.649 secs) Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 5 of 5 SUCCESS (5.01 secs / 5.004 secs) ... Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 90 of 90 SUCCESS (1 min 35.316 secs / 1 min 34.14 secs) ``` As you can see, it increases ~1sec per test in watch mode with the browser in the background. To test, I used the following: ```ts for (let i = 0; i < 90; i++) { it(`test: ${i}`, async(() => { expect(false).toBeTruthy(); })); } ``` It should be noted I am using `zone.js`'s patch for `jasmine` and `async` from `@angular/core/testing`. That said, I ran a few more tests with and without `async`. 
Here are the results: Foreground - With `async` ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 900 of 900 SUCCESS (8.084 secs / 7.947 secs) ``` - Without `async` ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 900 of 900 SUCCESS (6.681 secs / 6.568 secs) ``` Background - With `async` (had to cancel) ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 145 of 900 SUCCESS (0 secs / 2 mins 31.986 secs) ``` - Without `async` ``` Chrome 55.0.2883 (Mac OS X 10.12.0): Executed 900 of 900 SUCCESS (46.051 secs / 44.849 secs) ``` // related issue link https://github.com/angular/angular-cli/issues/4071 username_2: Chrome started throttling background tabs/browsers more aggressively. See https://www.chromestatus.com/feature/6172836527865856 username_3: @username_2 the aggressive throttling is only enabled by default on Chrome 57. The versions in question are 54 and 55. Still, I'm pretty sure this is not directly related with `karma-chrome-launcher`, but I'll investigate a bit further to confirm this. username_4: I ran into this issue with Chrome 57.0.2987. For me my karma tests would finish almost immediately, unless I tried to run them using the DEBUG tab in chrome. The same tests took almost a minute to complete. not good. karma-chrome-launcher was not the problem, but it did offer a solution for me. I added this to my karma.conf.js: ``` customLaunchers: { Chrome_without_background_throttle: { base: 'Chrome', flags: ['--disable-background-throttling'] } }, ``` and then started karma like this: ``` $ karma start src/test/javascript/karma.conf.js --debug --browsers=Chrome,Chrome_without_background_throttle --autoWatch=true ```
NASA-PDS/pds-api
815843476
Title: initiate a proposal for the AI WG and ENG team
Question:
username_0: @jordanpadams create a mind map to present possible parameters mappings from search to pds4 model. See https://mm.tt/1794840971?t=IZhdMVAiMV
Answers:
username_0: Architecture solution could be:
0) add search parameters in registry as harvest/registry-mgr step
1) add search parameters in registry as a post processing step
2) add search parameters translation in the API implementation (see [search, including semantics .pdf](https://github.com/NASA-PDS/pds-api/files/6039144/search.including.semantics.pdf) )
Option 0 is not kept for now since the mapping between search parameters and pds4 properties can be updated.
Status: Issue closed
EvotecIT/PSWriteHTML
438977084
Title: How to customize tabs?
Question:
username_0: I am testing PSWriteHTML and there isn't a whole lot of documentation; I checked the blogs and searched the code as well. Is there a way to customize the tabs' CSS/JS? I don't want the on-click effect or the fa-bomb icon, but I wasn't able to see any way to change them through parameters. Here is my example:

```powershell
Install-Module PSWriteHTML -Force
Import-Module PSWriteHTML -Force

$CSV = Import-Csv -Path C:\IT\process.csv

New-HTML -TitleText "Infrastructure Report" -UseCssLinks:$false -UseJavaScriptLinks:$false -Author "test" -FilePath C:\IT\infra.html -ShowHTML {
    New-HTMLTab -TabName 'TESTING' -TabHeading "testing" {
        New-HTMLContent -HeaderText 'Content Text' -CanCollapse {
            New-HTMLTable -DataTable $CSV -Buttons @('excelHtml5','csvHtml5') -HideFooter
        }
    }
    New-HTMLTab -TabName 'TESTING' -TabHeading "testing" {
        New-HTMLContent -HeaderText 'Content Text' -CanCollapse {
            New-HTMLTable -DataTable $CSV -Buttons @('excelHtml5','csvHtml5') -HideFooter
        }
    }
}
```

Answers:
username_1: I would like to give you good news but I can't. There's like 0 configuration for Tabs. The bomb is there kind of by accident as I wanted to load/allow all icons from Font Awesome to be available but failed with providing a guide. So I stuck with a bomb for a starter. The idea is that you should be able to pick None or one of 5xxx icons per each tab.

The idea of on-click effect - well that isn't configurable either. But you're right, it should be. Seeing as another person requested Tabs within Tabs, Tabs require a rewrite. I just need to find good-looking, "free" CSS/JS code with nice-looking tabs within tabs that can be easily adapted and changed around.

As for custom CSS/JS, it's a bit tricky. There are lots of moving parts in PSWriteHTML and touching one thing may break other stuff. I would prefer people doing some research over PSWriteHTML and help out with some choices/configuration.
The only thing I need is for it to be useful and good looking (to an extent). username_1: I'm working a bit on the tabs: ![image](https://user-images.githubusercontent.com/15063294/59214906-53578a00-8bb8-11e9-84e1-f96491185304.png) You'll be able to choose one of 1000+ icons. ```powershell Import-Module .\PSWriteHTML.psd1 -Force $Test = Get-Process | Select-Object -First 5 #$Test | Out-HtmlView New-HTML -TitleText 'My title' -UseCssLinks:$true -UseJavaScriptLinks:$true -FilePath $PSScriptRoot\Example22.html -Show { New-HTMLTab -IconBrands aws -Name 'Test 1' -IconColor DarkGoldenrod { New-HTMLTable -DataTable $Test -PagingOptions @(50, 100, 150, 200) { New-HTMLTableButtonPDF -PageSize TABLOID -Orientation portrait New-HTMLTableButtonExcel New-HTMLTableCondition -Name 'HandleCount' -Type number -Operator gt -Value 300 -BackgroundColor Yellow New-HTMLTableCondition -Name 'ID' -Type number -Operator gt -Value 16000 -BackgroundColor Green New-HTMLTableCondition -Name 'Name' -Type string -Operator eq -Value 'browser_broker' -BackgroundColor Gold -Row } } New-HTMLTab -Name 'Test 2' -IconSolid address-card { } New-HTMLTab -Name 'Test 3' -IconSolid camera { } New-HTMLTab -Name 'Test 4' -IconBrands microsoft { } New-HTMLTab -Name 'Test 5' -IconRegular calendar { } New-HTMLTab -Name 'Test 6' -IconSolid yin-yang { } } ``` You will be able to define a different color for icon and different for text. Same for size. Also, no icon is a possibility. Status: Issue closed
haml/haml
811366843
Title: HAML+Rails can lead to doubly escaped attributes Question: username_0: This behavior is new with HAML 5+. It was introduced with #1028, which is related to CVE-2016-6316. Here is a test case: ```ruby # in rails console Haml::Engine.new("%p{title: html_escape('hi &')}").render => "<p title='hi &amp;amp;'></p>\n" ``` In our Rails app it's common to write code like so: ```haml %meta{ property: "og:title", content: content_for(:og_title) || "App Name" } ``` This code used to work great, because HAML helpfully determined whether or not the value is html_safe and escaped as necessary. The new behavior escapes regardless, resulting in double escaping. Potential fixes: 1. Fix HAML to only escape quotes in this case (this is what Rails does). See https://github.com/rails/rails/compare/v4.2.7...v4.2.7.1 1. Encourage Rails users to use `Haml::Template.options[:escape_attrs] = :once`. (is this correct?) 1. Encourage Rails users to manually unescape HTML before passing to HAML. This seems dangerous. 1. Encourage Rails users not to co-mingle safe/unsafe attribute values. Difficult to enforce or detect. Do I have that correct? I've spent a few hours investigating but I could easily have missed something important. Answers: username_1: Seems like this is what you want. Because I haven't seen a case where it's needed and the underlying escape method used by `true` is way faster than one for `:once` (https://github.com/ruby/ruby/pull/1164 https://github.com/ruby/ruby/pull/2226), I'm reluctant to encourage every Rails user to use it. However, maybe we could improve the documentation for use cases like yours. username_0: Thanks Takashi. `content_for` always escapes, and we use it inside our templates to supply content to the layout. `og:title` is one example where attribute escaping is causing problems, but we have several others. 
I think this is the ActionView code that illustrates the behavior of content_for: https://github.com/rails/rails/blob/main/actionview/lib/action_view/helpers/capture_helper.rb https://github.com/rails/rails/blob/main/actionview/lib/action_view/flows.rb I wonder if anyone else is encountering this issue. `content_for` is a common pattern, and the behavior is subtle. Escaping issues are difficult to diagnose. Maybe it would be better to only escape quotes the way that Rails does. username_1: Now I get your use case, but I don't think we should escape quotes in Haml's case. The most important question is, do you really want to doubly escape `'` and `"` in og:title content? I think you would like to escape it only once, meaning you need to either use `Haml::Template.options[:escape_attrs] = :once` or unescape it if you use `content_for` to get og:title content. Either way, could you try `Haml::Template.options[:escape_attrs] = :once`? I think it solves your problem, while escaping only quotes may not solve it perfectly. username_0: We are experimenting with `:once` and it fixes the issue. Should we include a suggestion in the HAML docs somewhere? This `content_for` approach is probably pretty common. Ideally HAML wouldn't html_escape attribute values if they were already marked safe. I think the CVE was intended to address things like `sanitize` or manually calling `html_safe`. The Rails fix uses `value.gsub(/"/, '&quot;')`, which avoids the XSS issue without accidentally double escaping. Regardless, I'm quite happy with the fix :) Totally up to you if you want to take further action. Thank you for your help and your work on HAML! Status: Issue closed username_1: :+1: Updated the document https://github.com/haml/haml/commit/77ccce61059ac2e818440849e586dce20b651f8c.
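The non-idempotence at the heart of this can be illustrated outside Ruby. Here is a quick Python sketch using `html.escape` as a stand-in for Rails' `html_escape` — full HTML escaping is not idempotent, while the quote-only escaping used by the Rails 4.2.7.1 fix is:

```python
import html

# Full HTML escaping is not idempotent: escaping an already-escaped string
# mangles its entities, which is exactly the double escaping seen above.
once = html.escape("hi &")   # 'hi &amp;'
twice = html.escape(once)    # 'hi &amp;amp;' -- the doubly escaped value

# Quote-only escaping (what the Rails fix's gsub does for attribute
# values) *is* idempotent, because '&quot;' contains no quote characters.
def escape_quotes(value):
    return value.replace('"', "&quot;")

safe = escape_quotes('say "hi"')
assert escape_quotes(safe) == safe  # applying it again changes nothing
```

This is why escaping only quotes avoids the accidental double escaping while still neutralizing attribute-breaking input.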
blockonomics/prestashop-plugin
296629582
Title: Link to solution article when new address generation fails
Question:
username_0: Modify message _Unable to generate bitcoin address. Note for site webmaster: Your webhost is blocking outgoing HTTPS connections. Blockonomics requires an outgoing HTTPS POST (port 443)..._ to link to solution article https://blockonomics.freshdesk.com/support/solutions/articles/33000215104-troubleshooting-unable-to-generate-new-address
Status: Issue closed
Willenbrink/PaperTerminal
435537776
Title: Consider Rewrite of Driver in OCaml Question: username_0: Rewriting the driver in OCaml has the benefit of using only a single language. The performance impact of this should also be negligible because no weird pointer-arithmetic is used in C. The arrays used in C can also be translated to OCaml efficiently.
atata-framework/atata
500954851
Title: Why properties with type inherited from PageObject<T> aren't initialized in page? Question: username_0: Hi @username_1. I have one question. What is the logical difference between `PageObject<TOwner>` and `Page<TOwner>`? And what is `PageObject<TOwner>` in your project? For me, a page object is a logical part of a page with its own business logic. But in your framework, page properties with types inherited from `PageObject` won't be initialized. Could you explain that to me? Answers: username_1: Hey Oleg, `PageObject` is a base class for page objects. `Page` is a page object class inherited from `PageObject` and representing the whole HTML page. There is also the [`PopupWindow`](https://atata.io/components/#popupwindow) class which can be used as a base class for popup window page objects. Anyway, there should be no problem inheriting directly from `PageObject`. It's absolutely valid. It's just recommended to define some `PageObjectDefinition` for this class with a base XPath. Could you share your page object class whose properties get uninitialized? 
username_0: Look, I have the next page object and controls:

```csharp
[Url("/")]
public class DevelopersHomePage : Page<DevelopersHomePage>
{
    [FindById("build-profile-div")]
    public BuildProfilePageObject BuildProfilePageObject { get; set; }

    [FindById("build-profile-div")]
    public BuildProfileControl BuildProfileControl { get; set; }
}

public class BuildProfilePageObject : PageObject<BuildProfilePageObject>
{
    [FindByClass("title")]
    public Content<string, BuildProfilePageObject> Title { get; set; }

    [FindByClass("description")]
    public Content<string, BuildProfilePageObject> Description { get; set; }
}

public class BuildProfileControl : Control<DevelopersHomePage>
{
    [FindByClass("title")]
    public Content<string, DevelopersHomePage> Title { get; set; }

    [FindByClass("description")]
    public Content<string, DevelopersHomePage> Description { get; set; }
}
```

When getting `BuildProfile` as a `PageObject` I get null, but using `BuildProfile` as a `Control` I get an initialized object.

![image](https://user-images.githubusercontent.com/28996736/66036641-6c44cc00-e516-11e9-8770-264b4645f502.png)

Thanks for the answer!
username_1: Got it. Right, a `PageObject` as part of another `PageObject` will not be initialized. In the concepts of Atata, there should be a single active page object which can consist of controls, data providers and UI component parts, but not other page objects. You have to switch to another page object using `Go.To` or by using controls like `Link`, `Button`, etc. A vast section of controls can be extracted to a separate control class containing a set of its own controls. It can look like a hierarchy where the root is a page object and the controls are nodes.
username_0: Clear, I understand. Thanks a lot for the explanation.
Status: Issue closed
DataBiosphere/firecloud-app
348360655
Title: Cromwell 34 updates for Aug 8 release
Question:
username_0: Proposed FC testing plan, based on PR:
* Put it on a FIAB "A" and run a smoke test
* Observe that /stats returns 403 on FIAB A
* Test validation changes on FIAB A by running the specific workflow 100x
  * Run the same test on a dev FIAB "B" without these changes to see that the problem does exist there
* Test that stdout/stderr does update on FIAB A and does not on FIAB B
* Doug tests Martha scopes on FIAB A or after merging to dev, as appropriate
* Gary runs the alpha-perf test after merging to alpha
Answers:
username_1: :+1: for test plan, assuming we (Ruchi) has appropriate access to monitor CPU on FIAB A
username_0: Cromwell version hotfix PRs:
https://github.com/broadinstitute/firecloud-develop/pull/1299
broadinstitute/agora#263
broadinstitute/rawls#972
Cromwell config changes: https://github.com/broadinstitute/firecloud-develop/pull/1296
Status: Issue closed
pigeonflight/unep
48821351
Title: Fix the front of the meeting
Question:
username_0: ![image](https://cloud.githubusercontent.com/assets/31827/5050881/184a76d4-6c00-11e4-9a79-eae8b678722e.png)
Answers:
username_0: Need to determine the best name for the "quick link"; it should probably not be "information note".
auth0-samples/auth0-react-samples
665625989
Title: Getting Oops... Unauthorized after signup
Question:
username_0: Downloaded the sample and did `npm i` and `npm start`. I was able to sign up and sign in successfully according to the logs. However, after clicking on the green tick when prompted to grant permission, it kept displaying a blank page with `Oops... Unauthorized`.

Log type: `Failed Exchange`
Log as below:
```
{
  "date": "2020-07-25T17:37:08.413Z",
  "type": "feacft",
  "description": "Unauthorized",
  "connection_id": "",
  "client_id": "--removed--",
  "client_name": null,
  "ip": "192.168.127.12",
  "user_agent": "Chrome 84.0.4147 / Mac OS X 10.15.5",
  "details": {
    "code": "*************BA6"
  },
  "hostname": "--removed--",
  "user_id": "",
  "user_name": "",
  "log_id": "90020200725173713598000178280607258323126144419732914178",
  "_id": "90020200725173713598000178280607258323126144419732914178",
  "isMobile": false
}
```
Answers:
username_1: I also have this issue.
username_2: @username_0 Please ensure that you have filled in the "Allowed Web Origins" field when setting up your Auth0 client - this is necessary for the app to work. https://auth0.com/docs/quickstart/spa/react#configure-allowed-web-origins
username_3: I have the same issue. I just downloaded the sample, updated the config and ran it. The app logs in but fails with a page "error .. unauthorized". Checking the logs in Auth0 I see two entries:
![image](https://user-images.githubusercontent.com/7451319/89454381-7c6c9f80-d715-11ea-93eb-36fa8f5a7040.png)
username_4: I'm also getting this issue. @username_2 I did add `http://localhost:3000` to "Allowed Web Origins" (as well as to "Allowed Callback URLs" and "Allowed Logout URLs"). I see the same logs as @username_3.
username_5: I have the same issue when using the `Default App`, but it works with a new application.
username_6: This worked for me! Thanks!
username_2: Thanks! Closing this for now as it's a known issue we're dealing with on the server side and the fix is posted above.
Would be good to continue to hear from people where the fix does not work.
Status: Issue closed
username_7: I would recommend creating a new account and running your own example. This has been my onboarding experience so far:

1. As a new user I have, of course, the "Default App".
2. Start at https://auth0.com/docs/quickstart/spa/react, downloading the code. The site works fine locally, but the login feature doesn't work. It's complaining about the default value in the `audience` field. Okay, figuring out on the next page https://auth0.com/docs/quickstart/spa/react/02-calling-an-api what to fill in there.
3. After logging in with two different methods (because I read somewhere that the social login with Google might not work) I still get `Oops... Unauthorized`.
4. Quite a lot of the code is not the same as in the example app I have downloaded. Weird, why is it not updated? I see `ProfileComponent` rather than `Profile`, etc.
5. Start hunting down the above cryptic message on the internet and end up here. It doesn't work with "Default App"... Mmm... That's not the onboarding experience you want to have. If that's the case, make sure everyone first creates a new app, and disallow selecting the "Default App" from the drop-down box in the beginning.

I have now solved it by indeed changing the settings in "Default App" as described:

- Token Endpoint Authentication Method is **disabled**
- Change Application Type to **Regular Web Application**, this enables the above option
- Change Token Endpoint Authentication Method to **None**.
- Change Application Type to **Single Page Application**, don't forget to save.

I have the feeling Auth0 used to be really good at the onboarding experience, but there are no resources for this anymore. Please take my extended comment as hopefully constructive feedback. I'm not a disgruntled user, I'm honestly just a bit surprised (also at this issue just being closed).
Connoropolous/Hogan
54923448
Title: Address issues found in hogan-2.0.0.js [Opened by bitHound]
Question:
username_0: bitHound discovered 11 Lint Issues and 3 Complex Functions in [hogan-2.0.0.js](https://github.com/username_0/Hogan/blob/39b32fda5940bb920c3df8340a63739637ed3723/hogan-2.0.0.js). For details go to [bitHound](https://localhost:8443/username_0/Hogan/blob/39b32fda5940bb920c3df8340a63739637ed3723/hogan-2.0.0.js).<issue_closed> Status: Issue closed
opnsense/plugins
462044321
Title: Wireguard Peer "Allowed IPs" instead of "Tunnel Address"
Question:
username_0: Hi, I wanted to suggest a change to the nomenclature of the WireGuard Peer configuration. On the peer config page, instead of having the usual "Allowed IPs" (present even on OpenWrt), it was renamed to "Tunnel Address", when the original is a better explanation of what that setting is used for. Please revert back to the standard WireGuard nomenclature.
![Annotation 2019-06-28 143818](https://user-images.githubusercontent.com/8084245/60342700-7654ad00-99b2-11e9-82e2-09b3407a16a5.png)
Answers:
username_0: Even with that in mind I think it's still wrong; in fact, that caused me a lot of confusion when I saw people setting up a 0.0.0.0/24 "tunnel address" when in fact they were just allowing all the IPs. Btw, I am migrating from OpenVPN to WireGuard too.
username_1: I want to quickly point out that "thinking that something is wrong" is not a great way to have an open discussion. On a more positive note, what does the help text say and how could it be improved?
username_0: Sorry, I was trying to be polite; I didn't want to make any accusation whatsoever, be disrespectful or impose my idea as absolute. Sorry if I've offended in any way. The description states: `List of addresses to configure on the tunnel adapter. Please use CIDR notation like 10.0.0.1/24.` I personally would replace "Tunnel address" with the official "Allowed IPs", and as the description I would place something like "List of addresses allowed to pass through the wireguard tunnel. Please use CIDR notation like 10.0.0.1/24". For me this is the best approach to the matter, especially to avoid confusion between systems.
username_2: @username_0 let's make a deal: I'll change it to Allowed IPs and put the old "Tunnel Address" in the help text, and you check the documentation pages wireguard* and replace **Tunnel Address** with **Allowed IPs**?
https://github.com/opnsense/docs/tree/master/source/manual/how-tos
username_0: @username_2 without bothering you, I'll try to make a PR with all the changes. Luckily the code is well written, way better than asuswrt, so it should be kinda easy to do. Btw, to be clear, the changes are just related to the endpoint config. Referring to the tunnel address in the server config is correct.
username_0: Ok, I did create a PR; could you please check that everything is alright? https://github.com/opnsense/plugins/pull/1388
Status: Issue closed
username_2: @username_0 I already updated the docs (in review)
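For readers landing here from the rename: in upstream WireGuard terms the two settings are distinct things, as a minimal wg-quick style sketch shows (keys and addresses below are placeholders, not taken from this thread):

```ini
[Interface]
# What the old OPNsense UI called "Tunnel Address": the address
# configured on the tunnel adapter itself.
Address = 10.0.0.1/24
PrivateKey = <interface-private-key>

[Peer]
PublicKey = <peer-public-key>
# Upstream "AllowedIPs": which source addresses are accepted from this
# peer, and which destinations are routed into the tunnel for it.
# 0.0.0.0/0 here means "allow/route everything", not an interface address.
AllowedIPs = 10.0.0.2/32, 192.168.1.0/24
```

This is exactly the confusion described above: an AllowedIPs value like 0.0.0.0/0 is a routing/ACL statement, not a tunnel address.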
jOOQ/jOOQ
740048615
Title: KotlinGenerator produces compilation error in generated interfaces when <jpaAnnotations/> is set
Status: Issue closed
Question:
username_0: One solution would be to generate `@get:Column` and `@get:Id`, etc. annotations instead of `@Column` and `@Id`, etc. As shown here: https://stackoverflow.com/a/56225025/521799
Answers:
username_0: We'll use this prefix syntax everywhere, for consistency reasons.
username_0: Fixed in jOOQ 3.15.0 and 3.14.4 (#10914)
ionic-team/ionic-framework
719777456
Title: bug: Vue IonContent scrollToTop is undefined
Question:
username_0: # Bug Report

**Ionic version:**
[ ] **4.x**
[x] **5.x**

**Current behavior:**
I call the IonContent scrollToTop method and a console exception is thrown: "Uncaught TypeError: content.value.scrollToTop is not a function"

**Expected behavior:**
The IonContent method scrollToTop works.

**Steps to reproduce:**
1. Create a ref to IonContent - <ion-content ref="content">...</ion-content>
2. Scroll the page down
3. Call content.value.scrollToTop()

**Related code:**
```
<template>
  <ion-content :fullscreen="true" scrollEvents ref="content">
    <div style="height: 200vh" @click="scrollToTop"></div>
  </ion-content>
</template>

<script>
import {IonContent} from "@ionic/vue";
import {ref} from "vue";

export default {
  name: "Test",
  components: {
    IonContent,
  },
  setup(){
    const content = ref();

    return {
      content,
      scrollToTop: () => content.value.scrollToTop()
    }
  }
};
</script>
```

**Ionic info:**
```
[Truncated]
   Ionic CLI       : 5.4.16 (C:\Users\Damian\AppData\Local\Yarn\Data\global\node_modules\ionic)
   Ionic Framework : @ionic/vue 0.5.2

Capacitor:

   Capacitor CLI   : 2.4.2
   @capacitor/core : 2.4.2

Utility:

   cordova-res : not installed
   native-run  : not installed

System:

   NodeJS : v12.19.0 (C:\Program Files\nodejs\node.exe)
   npm    : 6.14.8
   OS     : Windows 10
```
Answers:
username_1: Thanks for the issue. You need to access the method on `$el`, but I will look into making it so you can access the method on the Vue component directly:
```js
content.value.$el.scrollToTop()
```
username_1: I think the best way to do this is to do what Ionic React does and "forward" the ref to the web component. However, it does not seem like this is possible in Vue right now: https://forum.vuejs.org/t/set-which-element-is-referenced-if-ref-is-set-from-parent/31480. An alternative could be to define the web component methods on the Vue component, then have the Vue component methods call the web component methods, but I do not think that is a sustainable solution.

I am going to remove this from the milestone as I don't think a good solution can be developed soon. I will work on documenting this limitation on our docs website.
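The workaround above (reaching through `$el`) and the proposed fix (forwarding methods from the wrapper to the element) boil down to one delegation pattern. A language-agnostic Python sketch of that idea follows; the class and method names are illustrative, not Ionic's actual API:

```python
class WebComponent:
    """Stands in for the underlying ion-content custom element."""
    def __init__(self):
        self.scrolled_to_top = False

    def scroll_to_top(self):
        self.scrolled_to_top = True


class VueWrapper:
    """Stands in for the framework component: it owns the element as `el`
    but does not re-expose the element's methods itself."""
    def __init__(self):
        self.el = WebComponent()

    def __getattr__(self, name):
        # "Forwarding the ref": delegate unknown attributes to the element,
        # which is what reaching through `$el` does manually in the workaround.
        return getattr(self.el, name)


wrapper = VueWrapper()
wrapper.scroll_to_top()            # resolved via forwarding to the element
print(wrapper.el.scrolled_to_top)  # True
```

The sustainability concern in the thread maps onto this sketch too: hand-writing each forwarded method (instead of generic delegation) has to be kept in sync with the element's API by hand.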
jvolkman/intellij-protobuf-editor
690957415
Title: Default Path Setting for all the projects instead of just the current one
Question:
username_0: Hello,
Is there any way I can set some proto directory paths to be included by default in all projects? I have some proto directories that I want to be included in all my projects by default, but currently, each time I open a project for the first time, I have to include them manually.
This feature was supported in https://github.com/protostuff/protobuf-jetbrains-plugin
Thank you
Answers:
username_1: Thanks for the request. This seems reasonable; I'll try to include it when I get around to rewriting the include system generally. Pretty swamped with my day job for the next month or so.
playgameservices/play-games-plugin-for-unity
313456617
Title: Resolver replaces .aar with a folder instead
Question:
username_0: On macOS 10.13.4 in Unity 2017.4.1f1, the resolver doesn't copy gpgs-plugin-support-0.9.42.aar from the cache but instead creates a folder named 'gpgs-plugin-support-0.9.42' with a bunch of files in it. My builds are failing with unknown errors (while compiling with IL2CPP) and I'm thinking it's probably because of this. Any ideas?
Answers:
username_1: If you have the aar in the Plugin/Android folder as well, you could always try deleting the folder. I have never really trusted the automated install processes from these scripts. On Mac, they have always been buggy at best. I use an older version of Google Play Games (as required by my own plugins) and the automated process attempted to install new versions of GPGS and support right alongside the existing ones with no warning or cleanup.
username_0: The moment I put the original .aar file back in the plugin folder, the resolver deletes it again, so... that's not a solution, unfortunately.
username_0: Still seeing this issue on Mac. Fingers 🤞 crossed Google will look at this issue.
jaegertracing/jaeger
839581367
Title: Error/warn if TLS flags are used when tls.enabled=false
Question:
username_0: ## Problem - what in Jaeger blocks you from solving the requirement?
The user was, rightfully, confused when the `--es.tls.skip-host-verify` flag was set but wasn't working.
## Proposal - what do you suggest to solve the problem or improve the existing situation?
Log a warning or error if any `--*.tls.*` flag is set when `--*.tls.enabled == false`, maybe somewhere [here](https://github.com/jaegertracing/jaeger/blob/master/pkg/config/tlscfg/flags.go#L73) and [here](https://github.com/jaegertracing/jaeger/blob/master/pkg/config/tlscfg/flags.go#L89).
## Any open questions to address
Should we log a warning or an error (and prevent startup of the service)? I would lean towards the latter to be more explicit, though I think it would be considered "breaking" behaviour. The former warning could be easily missed.
Answers:
username_1: +1 to crash
username_2: I would like to pick this one up. Is there a consensus that this incorrect configuration should be fatal?
username_0: @username_2 I think an incorrect configuration should be fatal rather than just being logged, and my understanding is @username_1 is on the same page (but please correct me if I'm wrong).
username_1: Agreed
username_3: @username_2 it's yours :-)
username_4: Is there activity on this issue? If not, can I take this over?
username_0: @username_2 are you okay with @username_4 taking over this task?
username_2: Yes, that's fine! Sorry for the delay.
username_4: It should fail for both the client and server side, is that right?
username_0: IIUC, the TLS flags should only apply to the server side, and so should fail for just the server side.
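The fail-fast check proposed in this issue is small; below is a language-agnostic Python sketch of the idea (Jaeger itself is Go). The flag names follow the issue's `--es.tls.*` examples, and the function name is made up for illustration:

```python
def check_tls_flags(flags):
    """Reject configs where --<prefix>.tls.* options are set
    while --<prefix>.tls.enabled is false (or missing)."""
    by_prefix = {}
    for name, value in flags.items():
        if ".tls." in name:
            prefix, _, option = name.partition(".tls.")
            by_prefix.setdefault(prefix, {})[option] = value
    for prefix, options in by_prefix.items():
        extras = sorted(o for o in options if o != "enabled")
        if extras and not options.get("enabled", False):
            raise ValueError(
                f"--{prefix}.tls.* flags {extras} are set "
                f"but --{prefix}.tls.enabled is false"
            )

# The confusing case from the issue: skip-host-verify without enabled.
try:
    check_tls_flags({"es.tls.skip-host-verify": True})
except ValueError as err:
    print(err)  # --es.tls.* flags ['skip-host-verify'] are set but --es.tls.enabled is false
```

Raising (rather than logging) matches the "fail the startup" consensus reached in the answers; a warn-only variant would just replace the raise with a log call.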
spring-projects/spring-framework
550078779
Title: Does Spring SpEL not support Chinese variables?
Question:
username_0: **Affects:** Spring Boot 2.1.3.RELEASE

I am trying to use SpEL to execute an expression that includes some Chinese variable names.
```
void test() {
    SimpleEvaluationContext evaluationContext = SimpleEvaluationContext
            .forReadWriteDataBinding()
            .withMethodResolvers(DataBindingMethodResolver.forInstanceMethodInvocation())
            .build();
    evaluationContext.setVariable("中文", 1);
    ExpressionParser parser = new SpelExpressionParser();
    Expression expression = parser.parseExpression("#中文 == 1");
    Boolean value = expression.getValue(evaluationContext, Boolean.class);
    System.out.println(value);
}
```
Exception stack trace:
```
java.lang.IllegalStateException: Cannot handle (20013) '中'
	at org.springframework.expression.spel.standard.Tokenizer.process(Tokenizer.java:268)
	at org.springframework.expression.spel.standard.InternalSpelExpressionParser.doParseExpression(InternalSpelExpressionParser.java:127)
	at org.springframework.expression.spel.standard.SpelExpressionParser.doParseExpression(SpelExpressionParser.java:61)
	at org.springframework.expression.spel.standard.SpelExpressionParser.doParseExpression(SpelExpressionParser.java:33)
	at org.springframework.expression.common.TemplateAwareExpressionParser.parseExpression(TemplateAwareExpressionParser.java:52)
	at org.springframework.expression.common.TemplateAwareExpressionParser.parseExpression(TemplateAwareExpressionParser.java:43)
```
Answers:
username_1: For identifiers such as variables, SpEL does not support characters whose integer value is greater than 255.
https://github.com/spring-projects/spring-framework/blob/6c2cb8ecf5d1d755f09aff80489aa8b6e49d70b1/spring-expression/src/main/java/org/springframework/expression/spel/standard/Tokenizer.java#L569-L574
However, the [documentation](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#expressions-ref-variables) in the reference manual does not explicitly state this fact.
Thus, we can improve the documentation here to be explicit. Status: Issue closed
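For reference, the guard quoted in this thread is easy to mirror outside of Spring; a small Python sketch of the same rule (the 255 cut-off comes from the Tokenizer source linked above, and the function name is illustrative):

```python
def is_spel_alphabetic(ch):
    """Mirror of the SpEL Tokenizer guard: any character whose code point
    is above 255 is rejected outright, so identifiers like '中文' cannot
    be used as SpEL variable names."""
    return ord(ch) <= 255 and ch.isalpha()

print(is_spel_alphabetic("a"))   # True
print(is_spel_alphabetic("中"))  # False; ord('中') == 20013, the value in the error message
```

The practical workaround implied by the answer is to use ASCII variable names (e.g. `#zhongwen` instead of `#中文`) when building the expression.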
tailhook/abstract-ns
256003199
Title: Another round of changes
Question:
username_0: Just a quick overview of the new traits and helpers (with many details omitted):
```rust
struct Name<'x>(&'x str, Option<u16>);

trait PollResolver {
    type Future: Future<Item=Address, Error>;
    fn resolve(&self, name: Name) -> Self::Future;
}
trait Resolver: PollResolver {
    type Stream: Stream<Item=Address, Void>;
    fn subscribe(&self, name: Name) -> Self::Stream;
}

struct PollAdapter<R: PollResolver>(R);
impl PollResolver for PollAdapter {...}
impl Resolver for PollAdapter {...}

struct Router {...}
impl PollResolver for Router {
    type Future: oneshot::Receiver<Address>;
}
impl Resolver for Router {
    type Stream: tip_channel::Receiver<Address>;
}
```
Details:

1. `BoxStream` and `BoxFuture` in [Resolver](https://docs.rs/abstract-ns/0.3.4/abstract_ns/trait.Resolver.html) have always been a temporary hack; let's replace them with associated types
2. We should split out `PollResolver` from `Resolver`
3. What has been done in the default method `subscribe` should be done in `PollAdapter`, so it's configurable (updates and poll interval)
4. `Resolver::subscribe` must return an infallible stream; we should consider using the void crate, or putting in our own void type, or using one from the futures crate [when that is added](https://github.com/alexcrichton/futures-rs/pull/567). This caused pain previously. The idea here is to retry resolving the name as long as the stream is used.
5. `Router::subscribe` (the structure, not the trait) should spawn a stream from the original resolver stream and connect it through something like a [Tip channel](https://github.com/alexcrichton/futures-rs/pull/570), either from the futures crate or a vendored type. (Or alternatively, use `FuturesUnordered`.)
6. The `Name` type is a structure

# The Problems It Solves

1. Errors in subscription streams were a pain. It was unclear when it is valid to return an error.
2. Streams become more and more complex, and a connection pool has to poll the DNS subscription stream on every wake up (this is how futures work).
So let's make the polled structure a very cheap channel.
3. Middlewares can now be made without virtual call overhead
4. Updating the `Router` configuration on the fly should work
5. Using a bounded or unbounded stream between the router and the consumer has its own downsides (bounded: an old value may be used at any time in the future; unbounded: a memory leak can occur), hence the [Tip channel type](https://github.com/alexcrichton/futures-rs/pull/570).
6. Parsing `Name` in the previous version was a pain, and it [wasn't clear](https://github.com/username_0/abstract-ns/issues/5) that a port might be specified.

*The original motivation was that if you create a cache like `HashMap<NameBuf, Address>`, it's not possible to `Borrow<Name>` from `NameBuf` (if `NameBuf(String, u16)` and `Name(&str, u16)`), and no clear design for this was foreseen. It turns out that it's possible to use `OwningRef` for that, with not that big an overhead.*

# More Enhancements To Do

1. The Router should be able to update the configuration and recheck each connected stream against the new domain routing table. Presumably, this requires the router managing the streams and connecting them through a channel like the one described above.
2. We should add an abstraction that accepts a `Stream<Item=Vec<Name>,_>` rather than a `Vec<Stream<Item=Address>>`, so that we can update the list of names the connection pool connects to, not just resolve already listed names (this allows better on-the-fly configuration reload in servers).

# Questions

1. Associated errors? Probably it's too much pain.
2. Should the name structure have public fields? Should `NameBuf` be implemented? Maybe `Name`/`NameBuf` should hold an `Arc<String>`, because there is a large chance that it will be moved between futures multiple times.

Tagging @srijs as you mentioned you have some thoughts.
Also @seanmonstar for speaking about https://github.com/hyperium/hyper/pull/1174
Answers:
username_0: Well, I currently tend to make Name a `Name(Arc<(String, u16)>)`, because putting it in a future means the name should be `'static`. The only use case for having a borrowed name is quickly looking into an in-process cache. On the other hand, a clone of the name will probably stay in the `Resolver` and in many middlewares, I guess (including the `PollAdapter` described above, and `UnionStream`). So making it cheaply cloneable looks worth it. Unless somebody has a better idea.
username_0: Well, when working on the changes I noticed I was handling the port in a wrong way; in particular:
* `example.org:1234` means to connect to port `1234` on the IP that `example.org` resolves to, but
* `_xmpp-server._tcp.example.org` means to connect to whatever port the SRV record points to
* On the other hand, `http://example.org` means connect to `example.org` with the default port, and solving this case too was the original intention.
At the end of the day, we should decouple the concepts of the default port and the port to connect to in some sensible way.
username_0: Okay, I just merged the `host_service_split` branch into master. It splits the resolver trait into [four](https://github.com/username_0/abstract-ns/blob/23603f8ab563cd518045255f51f9328dbd242786/src/lib.rs#L7-L19) and I'm more or less satisfied with the result. Will publish on crates.io soon.
Status: Issue closed
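The "tip channel" mentioned in point 5 of the proposal is neither bounded (a stale value may be consumed much later) nor unbounded (a backlog can leak memory); it is a single slot that each send overwrites. A toy single-threaded Python sketch of that semantics (not the futures-rs proposal itself):

```python
class TipChannel:
    """Single-slot channel: each send overwrites the previous value,
    each recv takes the current tip (or None if nothing new arrived)."""
    def __init__(self):
        self._slot = None
        self._fresh = False

    def send(self, value):
        self._slot = value   # overwrite: no backlog can accumulate
        self._fresh = True

    def recv(self):
        if not self._fresh:
            return None
        self._fresh = False
        return self._slot


ch = TipChannel()
ch.send(["10.0.0.1:80"])
ch.send(["10.0.0.2:80"])  # overwrites the stale address list
print(ch.recv())          # ['10.0.0.2:80'] -- only the tip is observed
print(ch.recv())          # None -- nothing new since the last recv
```

This is why the design above routes DNS updates to the connection pool through such a channel: polling it on every wake-up is cheap, and the pool can never act on a stale address list.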
tensorflow/models
339743992
Title: mask is not added for mask_rcnn_inception_v2
Question:
username_0: ### System information
- **What is the top-level directory of the model you are using**: /models/object_detection
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: using data_tools/create_pet_tf_record.py with faces_only False (I want masks)
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Debian 9
- **TensorFlow installed from (source or binary)**: installed from pip3
- **TensorFlow version (use command below)**: 1.8.0
- **Bazel version (if compiling from source)**:
- **CUDA/cuDNN version**:
- **GPU model and memory**:
- **Exact command to reproduce**: create image files gear_1.jpg /2,3,4...., create annotation files gear_1.xml, create a mask file as gear_1.xml; paths: /images/mask/images, images/mask/test_images, images/mask/annotations/xmls, images/mask/annotations/masks. Put all the files into the directories above, generate the TF record, then launch train/eval, but the mask is not displayed (my mask is just a line that delimits the object, not the complete object; just the delimitation). How can I add the mask or see it in eval or detection?
Answers:
username_0: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md I did everything specified here and in all the tutorials without success; maybe one command or one configuration is missing, but it is not clear. I use the pet record script with my own dataset. Everything is OK, only the mask is not displayed. I asked a question on Stack Overflow but got no answer. It is really annoying, please help me!
username_0: I also tried the plain pet dataset: untar, then data/create_pet_tfrecord.py, then train, eval, tensorboard; nothing happens. Anyone to help me, please?
username_1: Hi username_0, have you resolved your issue? Because I'm experiencing the exact same trouble on my end and it's getting frustrating. I have also installed TensorFlow 1.8, running the mask_rcnn_inception_v2_coco model (CPU mode). I've done everything as asked in the documentation but despite everything, I can't display masks. Many thanks
username_2: Did you set [this flag](https://github.com/username_2/models/blob/master/research/object_detection/dataset_tools/create_pet_tf_record.py#L51) to be false when generating the dataset? It would produce different input files which have the mask field.
username_0: Yes, it is false, and in the record file I see the image/object lines... I debugged create_pet_tf_record.py and the mask and path retrieval are there... All OK... But when I launch training, nothing happens with the mask. Is there any specific command or parameter besides the flag, in train.py or eval.py? Does it work with only a CPU? I am lost.
username_2: Could you share the config you are using?
username_0: Of course, I made you a zip. I tested with mask_rcnn_inception_v2_coco_2018_01_28 (downloaded the model). I tried the sample pet dataset. I tried with only one shard (I don't know what the difference is between one or several). I tried with faces_only FALSE/YES. I tried with the data changed:

#nonbackground_indices_x = np.any(mask_np != 2, axis=0)
#nonbackground_indices_y = np.any(mask_np != 2, axis=1)
nonbackground_indices_x = np.any(mask_np == 1, axis=0)
nonbackground_indices_y = np.any(mask_np == 1, axis=1)

I had the file masks_train.record generated (1 of 10) and you can see there is a mask in it. Just one question: when we launch training with model.ckpt => mask_rcnn_inception_v2_coco_2018_01_28, is it from scratch? Why give this directory, is it training from scratch or retraining? I found an answer nowhere. This is the zip; you have all the files I used (the rest is TensorFlow 1.8.0 with the path to slim, research and object_detection). Thank you very much, I don't know what to do now!
[sauv.zip](https://github.com/tensorflow/models/files/2194864/sauv.zip)
username_0: With a new git pull, train.py, trainer.py etc. are in the legacy folder. I have a copy from the object_detection folder. When I launch training now I have this error: ImportError: cannot import name 'trainer'
username_0: I updated to your new architecture (folder /legacy); it works, with the same problem of no reference to the mask. Is there a special command or parameter to configure in train.py or eval.py? Just the pipeline.config and create_pet_tf_record.py?
username_0: Hi, a guy helped me on thread #3913: one configuration was missing in pipeline.config, number_of_stages: 3, which is very important. Now I launch training and from the beginning I see the mask. So I have another question, but I think I will open a new issue; maybe I'll just ask here first: how can I change the mask color? I created the outline in yellow, but I see a green color and the whole shape... How can I change this? Thank you. If you tell me to close the ticket, I will do it.
username_0: The issue persists. I do not see the mask_loss in the eval TensorBoard, and the mask is not the outline/png I made but a complete green mask on the bounding box... Any idea?
username_0: Hi, in fact I think a simple outline does not exist; it is segmentation, and I never saw an outline in the different examples.
username_2: Whole box mask typically means that the training just started, so the predictor didn't do any job yet. When training for more steps it should become better.
username_3: @username_2 I have trained it for 1500 steps but still no shape in the mask at all. I'm pretty sure I'm doing something wrong. I followed a blog where the mask already shows from step 0. See my screenshot and steps at [https://github.com/tensorflow/models/issues/6135](https://github.com/tensorflow/models/issues/6135)
username_0: I had the same problem after 20000 steps on 2 objects with 300 pictures.
username_4: Hi there, we are checking to see if you still need help on this, as this seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue and the error you are seeing. If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing this. Status: Issue closed
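The setting that unblocked this thread, number_of_stages: 3, sits in the model block of pipeline.config. A minimal fragment (field names as in the TF Object Detection API's sample Mask R-CNN configs; other required fields elided):

```
model {
  faster_rcnn {
    number_of_stages: 3  # the third stage is the mask head; without it no masks are predicted or shown in eval
    ...
  }
}
```

With only two stages configured, training and evaluation run but behave like plain Faster R-CNN box detection, which matches the "no mask displayed" symptom reported above.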
ga-wdi-exercises/to_oz
248791023
Title: CLI(Diona) Question: username_0: $ cd homework Tue Aug 08 09:45:00 ~/wdi/homework $ ls House Tue Aug 08 09:45:01 ~/wdi/homework $ cd House Tue Aug 08 09:45:10 ~/wdi/homework/House $ touch Dorothy.txt Toto.txt Tue Aug 08 09:45:23 ~/wdi/homework/House $ mkdir Oz Tue Aug 08 09:45:51 ~/wdi/homework/House $ cd Oz Tue Aug 08 09:45:56 ~/wdi/homework/House/Oz $ touch Good_Witch_of_the_North Tue Aug 08 09:46:51 ~/wdi/homework/House/Oz $ touch Wicked_Witch_of_the_East.txt Tue Aug 08 09:47:36 ~/wdi/homework/House/Oz $ ls Good_Witch_of_the_North Wicked_Witch_of_the_East.txt Tue Aug 08 09:47:38 ~/wdi/homework/House/Oz $ touch Good_Witch_of_the_South.txt Tue Aug 08 09:48:13 ~/wdi/homework/House/Oz $ touch Wicked_Witch_of_the_West.txt $ rm Wicked_Witch_of_the_East.txt Tue Aug 08 09:49:46 ~/wdi/homework/House/Oz $ ls Good_Witch_of_the_North Wicked_Witch_of_the_West.txt Good_Witch_of_the_South.txt Tue Aug 08 09:49:47 ~/wdi/homework/House/Oz $ cd House -bash: cd: House: No such file or directory Tue Aug 08 09:50:12 ~/wdi/homework/House/Oz $ cd .. Tue Aug 08 09:50:21 ~/wdi/homework/House $ ls Dorothy.txt Oz Toto.txt Tue Aug 08 09:50:24 ~/wdi/homework/House $ mv Dorothy.txt Oz/ Tue Aug 08 09:50:44 ~/wdi/homework/House $ ls Oz Toto.txt Tue Aug 08 09:50:46 ~/wdi/homework/House $ cd Oz Tue Aug 08 09:50:50 ~/wdi/homework/House/Oz $ ls Dorothy.txt Good_Witch_of_the_South.txt Good_Witch_of_the_North Wicked_Witch_of_the_West.txt $ touch Scarecrow.txt Tin_Man.txt Cowardly_Lion.txt Tue Aug 08 13:09:43 ~/wdi/homework/House $ ls Cowardly_Lion.txt Oz Scarecrow.txt Tin_Man.txt Toto.txt Tue Aug 08 13:09:45 ~/wdi/homework/House $ mkdir Emerald_City Tue Aug 08 13:10:10 ~/wdi/homework/House $ ls Emerald_City Oz Scarecrow.txt Tin_Man.txt Toto.txt Tue Aug 08 13:20:01 ~/wdi/homework/House [Truncated] Tue Aug 08 13:22:29 ~/wdi/homework/House/Oz $ cd .. 
Tue Aug 08 13:24:00 ~/wdi/homework/House $ ls Emerald_City Oz Tue Aug 08 13:24:02 ~/wdi/homework/House $ cd Emerald_City Tue Aug 08 13:24:13 ~/wdi/homework/House/Emerald_City $ ls Cowardly_Lion.txt Tin_Man.txt Scarecrow.txt Toto.txt Tue Aug 08 13:24:14 ~/wdi/homework/House/Emerald_City $ echo diploma >Scarecrow.txt Tue Aug 08 13:26:33 ~/wdi/homework/House/Emerald_City $ open Scarecrow.txt Tue Aug 08 13:26:39 ~/wdi/homework/House/Emerald_City $ echo heart shaped watch >Tin_Man.txt Tue Aug 08 13:27:15 ~/wdi/homework/House/Emerald_City $ echo medal >Cowardly_Lion.txt Tue Aug 08 13:27:35 ~/wdi/homework/House/Emerald_City<issue_closed> Status: Issue closed
krasa/GrepConsole
955814776
Title: Multiline highlighting not working Question: username_0: Using the plugin in AppCode along with `CocoaLumberjack`:
```
DDLogDebug(@"Multi\nline\nmessage");
```
![Screenshot 2021-07-29 at 15 51 15](https://user-images.githubusercontent.com/20910077/127495638-c4d74c7a-861a-4e3f-852e-b9324a2a5aa5.png)
Should I enable multiline highlighting in some way to make it work?
![Screenshot 2021-07-29 at 15 54 35](https://user-images.githubusercontent.com/20910077/127495768-83c5ffb6-ac2e-4cca-9472-d5c85cbae389.png)
Answers: username_1: this: ![image](https://user-images.githubusercontent.com/1160875/137619011-8d86a46d-3a6f-4cb0-ae16-52649b678f06.png)
oppia/oppia
218667093
Title: Help tooltip overlay causes unintended disappearance of profile menu Question: username_0: The tooltip "To get help in future click here" doesn't disappear on click and causes unintended behavior with the profile menu.
**How to produce the error?**
1. Log in locally with a new username, or just use the test username with the admin panel.
2. Create an exploration. This should pop up the tooltip _"To get help in future click here"_. Now click on the profile icon and try to go to the profile page/creator dashboard.
**Error**
1. The menu simply disappears and the user can't navigate to any page from the menu.
**Expected behavior**
1. The tooltip must not be cut off.
2. It must disappear on click and shouldn't block the user from accessing the menu.
![screenshot from 2017-04-01 09-16-05](https://cloud.githubusercontent.com/assets/24438869/24575213/e5343194-16bc-11e7-9631-1cdb12dcb709.png)
Answers: username_1: Hi @username_0! I just want to tell you that this issue is similar to issue #3093 and I am working on it. Due to my exams and bad health I was unable to complete it, but now I am back on track and I want to work on it again. So I think this issue should be closed because it's a similar issue. :)
username_0: Sure. Thanks for letting me know. Closing in view of #3093
Status: Issue closed
toolkit-for-ynab/toolkit-for-ynab
145322089
Title: Move money dialog broken by latest YNAB update Question: username_0: See YNAB forum post [here](http://forum.youneedabudget.com/discussion/50054/april-1st-updates-have-caused-the-move-money-dialog-to-no-longer-work-with-toolkit-for-ynab). The height of the budget rows causes the biggest problem.
Answers: username_1: Thanks for the report. I've fixed this on my machine and this will be resolved in the next release (today)!
Status: Issue closed
ektrah/nsec
419253823
Title: NSEC on Xamarin Forms - Platform Not Supported Question: username_0: Hi, I'm trying to use NSec with Xamarin Forms, but I'm having some trouble. The library doesn't load. Can anybody help me, please? Thanks.
```
System.TypeInitializationException: The type initializer for 'BizPay.Crypto.CryptoHelper' threw an exception. ---> System.PlatformNotSupportedException: Could not initialize platform-specific components. NSec may not be supported on this platform. See https://nsec.rocks/docs/install for more information. ---> System.DllNotFoundException: libsodium
  at (wrapper managed-to-native) Interop+Libsodium.sodium_library_version_major()
  at NSec.Cryptography.Sodium.InitializeCore () [0x00000] in <ba1701410b96469f9e9378cadfef57a0>:0
  --- End of inner exception stack trace ---
  at NSec.Cryptography.Sodium.InitializeCore () [0x0003e] in <ba1701410b96469f9e9378cadfef57a0>:0
  at NSec.Cryptography.Sodium.Initialize () [0x00007] in <ba1701410b96469f9e9378cadfef57a0>:0
  at NSec.Cryptography.Algorithm..ctor () [0x00006] in <ba1701410b96469f9e9378cadfef57a0>:0
  at NSec.Cryptography.SignatureAlgorithm..ctor (System.Int32 privateKeySize, System.Int32 publicKeySize, System.Int32 signatureSize) [0x00000] in <ba1701410b96469f9e9378cadfef57a0>:0
  at NSec.Cryptography.Ed25519..ctor () [0x00000] in <ba1701410b96469f9e9378cadfef57a0>:0
  at NSec.Cryptography.SignatureAlgorithm.get_Ed25519 () [0x00009] in <ba1701410b96469f9e9378cadfef57a0>:0
  at BizPay.Crypto.CryptoHelper..cctor () [0x00000] in /Users/leandrolustosa/Documents/BizWallet/BizPay/BizPay/Crypto/CryptoHelper.cs:13
  --- End of inner exception stack trace ---
  at BizPay.ViewModels.WalletViewModel+<ExecuteSendTxCommand>d__26.MoveNext () [0x001fd] in /Users/leandrolustosa/Documents/BizWallet/BizPay/BizPay/ViewModels/WalletViewModel.cs:92
--- End of stack trace from previous location where exception was thrown ---
  at BizPay.ViewModels.WalletViewModel+<ExecuteSendTxCommand>d__26.MoveNext () [0x000cb] in /Users/leandrolustosa/Documents/BizWallet/BizPay/BizPay/ViewModels/WalletViewModel.cs:78
```
Answers: username_1: NSec is not supported on this platform.
Status: Issue closed
Scruff72/ColmarAcademy
282475596
Title: Summary Question: username_0: Satisfactory 👍 You did a great job writing a fully mobile-responsive website! At this point, all you've got left to do is work on small details. See if you can make your code more efficient or easier to read. The less code the better, and this is especially true in CSS. Also, just for practice you should see if you can make some nice mobile versions of your previous sites. There's no reason you should leave this program without a full portfolio of mobile-responsive websites!
Answers: username_1: Could you be more specific on the "See if you can make your code more efficient or easier to read. The less code the better, and this is especially true in CSS."? I included generic styles applied to multiple sections of the html so as to reduce or eliminate repetitive use of styles throughout the stylesheet. What's not easy to read?
Nuytemans-Dieter/BetterSleeping
873877535
Title: API addition: Request plugins to set time to day Question: username_0: **Describe the new feature you have in mind**
Make it possible for BetterSleeping to request that the time be set to day through an event. This way, other plugins that manage time could listen to this event and handle the time being set to day.<issue_closed> Status: Issue closed
spf13/viper
115292183
Title: Default values overwriting everything else Question: username_0: Whenever I try to set a default for any variable, it overwrites the configuration from files. For example, in the following (stripped down but probably working :smile: ) code:
```go
func LoadFromFile(path string, name string) error {
	viper.SetConfigName(name)
	viper.AddConfigPath(path)
	err := viper.ReadInConfig()
	if err != nil {
		return errors.New("Fatal error reading config file: " + err.Error())
	}
	viper.SetDefault("db.hostname", "localhost")
	return nil
}
```
Until `viper.SetDefault("db.hostname", "localhost")` is called, the variable `db.hostname` has whatever was in the config file. After this call, it always contains "localhost". Am I doing something wrong here, or is it a bug?
Answers: username_0: Never mind, looking at the __closed__ issues I noticed it was fixed a few days ago in #115. After a `go get -u` on this package everything is working as intended. Sorry for the unnecessary ticket!
Status: Issue closed
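As an editor's aside, the precedence the report expects (a value loaded from the config file wins over a later `SetDefault`) can be sketched with a toy resolver in plain Python. This illustrates the intended semantics only; it is not viper's code:

```python
class Config:
    """Toy config resolver illustrating default-vs-explicit precedence."""

    def __init__(self):
        self._defaults = {}
        self._values = {}

    def set_default(self, key, value):
        # Defaults live in their own map, so registering one can never
        # overwrite a value that was read from a config file.
        self._defaults[key] = value

    def set(self, key, value):
        self._values[key] = value

    def get(self, key):
        # Explicit values take precedence; defaults are only a fallback.
        if key in self._values:
            return self._values[key]
        return self._defaults.get(key)
```

With this ordering, calling `set_default("db.hostname", "localhost")` after loading a file leaves the file's value intact, which is the behavior the fix in #115 restored.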
go-gitea/gitea
386570161
Title: Web API: closed_at field null even though ticket is closed. Question: username_0:
- Gitea version (or commit ref): bc42b3a
- Git version: 2.11.0
- Operating system:
- Database (use `[x]`):
  - [ ] PostgreSQL
  - [x] MySQL
  - [ ] MSSQL
  - [ ] SQLite
- Can you reproduce the bug at https://try.gitea.io:
  - [x] Yes (provide example URL): https://try.gitea.io/api/v1/repos/rakshith-ravi/test/issues/2
  - [ ] No
  - [ ] Not relevant
- Log gist:
## Description
When closing an issue, the `closed_at` field in the API response is null even though the ticket is closed.
**Example:**
{"id":4321,"url":"https://try.gitea.io/api/v1/repos/rakshith-ravi/test/issues/2","number":2,"user":{"id":9467,"login":"Test","full_name":"","email":"<EMAIL>","avatar_url":"https://secure.gravatar.com/avatar/f10679d1743e9ff23ac5dd72d5bf71d8?d=identicon","language":"de-DE","username":"TestJinInQ"},"title":"test","body":"test","labels":[],"milestone":null,"assignee":null,"assignees":null,"state":"closed","comments":0,"created_at":"2018-12-02T15:25:48Z","updated_at":"2018-12-02T15:28:22Z","closed_at":null,"due_date":null,"pull_request":null}
closed_at == null.
**Expected:**
{"id":4321,"url":"https://try.gitea.io/api/v1/repos/rakshith-ravi/test/issues/2","number":2,"user":{"id":9467,"login":"Test","full_name":"","email":"<EMAIL>","avatar_url":"https://secure.gravatar.com/avatar/f10679d1743e9ff23ac5dd72d5bf71d8?d=identicon","language":"de-DE","username":"TestJinInQ"},"title":"test","body":"test","labels":[],"milestone":null,"assignee":null,"assignees":null,"state":"closed","comments":0,"created_at":"2018-12-02T15:25:48Z","updated_at":"2018-12-02T15:28:22Z","closed_at":"2018-12-02T15:28:22Z","due_date":null,"pull_request":null}
closed_at == updated_at == 2018-12-02T15:28:22Z
Answers: username_1: Could I take a look at this one if no one is already working on it?
Status: Issue closed
algorithm005-class01/algorithm005-class01
538055517
Title: 【0270_week 01】Weekly Learning Summary Question: username_0: The first week made clear how intense the studying is, so for the coming weeks I can make a plan that suits me better. I learned some basic patterns for working through problems and how the overall learning process goes. While doing problems I also noticed some of my own issues, such as low efficiency, going through problems too slowly, and forgetting too quickly. When I come back to a problem after finishing it once, it's basically at the level of "I don't recognize it, and it doesn't recognize me."

Array: contiguous memory, O(1) access speed; the drawback is that remove and insert are both O(n) time.
Hence the linked list: memory is not contiguous, and a next pointer points to the next element; remove and insert are O(1), but lookup becomes O(n) time.
(Here we can stop and think: how do we optimize, and from what angle? It really comes down to two points: trade space for time, and go from one dimension up to two.)
To solve this problem, the skip list appeared. On top of the plain linked list, the skip list trades space for time by adding multiple levels of indexes. Lookup is thereby optimized to O(log n), while space complexity rises to O(n).

Trade space for time; go from one dimension up to two.

70. Climbing Stairs
```
# n = 3  f(1) + f(2)
# n = 4  f(2) + f(3)
# n = 5  ...
# f(n) = f(n-1) + f(n-2)
```
```
# when you're stuck:
# brute force? base cases?
# find the nearest repeated subproblem
# if...else
# for while recursion
```
11. Container With Most Water
```
# coordinate transform over a 1-D array: i, j
# enumeration / brute force: (x-y)*diff_height, O(n^2)
# left/right boundaries converging toward the middle (two-pointer squeeze), O(n)
```
189. Rotate Array
Brute force: pairwise swaps, O(n^2) time, O(1) space.
Trading space for time: O(n) time, O(n) space.
Three reversals: O(n) time, O(1) space.

21. Merge Two Sorted Lists
Iterative: O(n). Recursive: termination condition plus recurrence relation, O(2^n).

88. Merge Sorted Array
Merge the arrays first, then sort the whole thing once.
Two pointers front-to-back: O(m+n) time, O(m) space.
Two pointers back-to-front: O(m+n) time, O(1) space.

1. Two Sum
Brute force: O(n^2) time, O(1) space. Hash map: O(n) time, O(n) space.
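The Climbing Stairs recurrence noted above, f(n) = f(n-1) + f(n-2), turns into an O(n)-time, O(1)-space loop by keeping only the last two values. A small Python sketch (an editor's illustration, not part of the original summary):

```python
def climb_stairs(n):
    """Number of distinct ways to climb n steps taking 1 or 2 at a time.

    Iterative form of f(n) = f(n-1) + f(n-2); O(n) time, O(1) space.
    """
    prev, curr = 1, 1  # f(0) = 1, f(1) = 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

Keeping just two rolling variables is the standard answer to the "space for time" question raised above: the full DP table is never needed.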
jpush/jpush-react-native
324355323
Title: A small personal suggestion Question: username_0: I hope that listener-adding methods like `addGetRegistrationIdListener` could gain a guard similar to the `if (!listeners[cb])` check used in the remove methods, e.g.:
```
static removeReceiveExtrasListener (cb) {
  if (!listeners[cb]) {
    return
  }
  listeners[cb].remove()
  listeners[cb] = null
}
```
That way the same listener cannot be registered multiple times. In my own usage I did add the corresponding remove call, but it did not take effect, which caused the same navigation to be triggered repeatedly; a guard like this could solve that problem. It's just a small personal suggestion.
Answers: username_1: I would actually not support doing this, because it is a matter of handling your own logic correctly: you must cancel the listener on exit rather than have the library check for you. Better to build that habit.
openanalytics/shinyproxy
854279799
Title: TLS cookie without secure flag set Question: username_0: Hi, I've got shinyproxy behind an nginx reverse proxy with https. I recently noticed that the TLS cookie set by shinyproxy lacks the secure flag, which is recommended in this case. I think it would be good to have an option in the config file to inform shinyproxy that we are behind a secure connection. Tested on shinyproxy 2.5.0.
Answers: username_1: Hello @username_0 See https://shinyproxy.io/documentation/configuration/#security Best, Tobias
username_0: This is what I need, thank you!
Status: Issue closed
AlvaroContrerasS/UDM
310479856
Title: CREV1022 - processCreditRenewalSimulation Question: username_0: ### Service Name
processCreditRenewalSimulation
### Service Type
* [x] Canonical
* [ ] Orchestrated
* [ ] Self-Orchestrated
* [ ] WebService
* [ ] B2B
### Description
[Brief description of the service]
### Sharepoint link
### Versions
Answers: username_0: Documentation and traces from the project are still pending.
username_1: The processCreditRenewalSimulation service is exposed in Factory and Laboratory.
EndPoint: http://192.168.127.12:6106/services/OperCreditRenegRenew/v1.1?wsdl
The contract is in SVN.
refinery-platform/refinery-platform
311570579
Title: User-files filter bug Question: username_0: * Specific code commit: 8f071da33e45f263fc6f6d8f1fd43686a7949a81 * Environment where the error occurred (Vagrant VM and site conf mode or AWS instance): Beta site ### Steps to reproduce Please list all the actions and the input data used: 1. Filters in user/files ### Observed behavior ``` Internal Server Error: /api/v2/files/ Traceback (most recent call last): File "/home/ubuntu/.virtualenvs/refinery-platform/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/ubuntu/.virtualenvs/refinery-platform/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view return view_func(*args, **kwargs) File "/home/ubuntu/.virtualenvs/refinery-platform/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view return self.dispatch(request, *args, **kwargs) File "/home/ubuntu/.virtualenvs/refinery-platform/lib/python2.7/site-packages/rest_framework/views.py", line 466, in dispatch response = self.handle_exception(exc) File "/home/ubuntu/.virtualenvs/refinery-platform/lib/python2.7/site-packages/rest_framework/views.py", line 463, in dispatch response = handler(request, *args, **kwargs) File "/srv/refinery-platform/refinery/user_files_manager/views.py", line 57, in get solr_response = _get_solr(request.query_params, request.user.id) File "/srv/refinery-platform/refinery/user_files_manager/views.py", line 66, in _get_solr user_id=user_id) File "/srv/refinery-platform/refinery/user_files_manager/utils.py", line 59, in generate_solr_params_for_user facets_from_config=True) File "/srv/refinery-platform/refinery/data_set_manager/utils.py", line 680, in generate_solr_params facet_field = insert_facet_field_filter(facet_filter, facet_field) File "/srv/refinery-platform/refinery/data_set_manager/utils.py", line 746, in insert_facet_field_filter ind = facet_field_arr.index(facet) 
ValueError: u'organism' is not in list GET:<QueryDict: {u'sort': [u''], u'limit': [u'100'], u'filter_attribute': [u'{"organism":["Homo%20sapiens","Mus%20musculus"]}']}>, … 'QUERY_STRING': 'filter_attribute=%7B%22organism%22:%5B%22Homo%2520sapiens%22,%22Mus%2520musculus%22%5D%7D&limit=100&sort=', 'REMOTE_ADDR': '172.30.0.226', 'REMOTE_PORT': '19019', 'REQUEST_METHOD': 'GET', 'REQUEST_SCHEME': 'http', 'REQUEST_URI': '/api/v2/files/?filter_attribute=%7B%22organism%22:%5B%22Homo%2520sapiens%22,%22Mus%2520musculus%22%5D%7D&limit=100&sort=', 'SCRIPT_FILENAME': '/srv/refinery-platform/refinery/config/wsgi_aws.py', 'SCRIPT_NAME': u'', 'SCRIPT_URI': 'http://beta.stemcellcommons.org/api/v2/files/', 'SCRIPT_URL': '/api/v2/files/', 'SERVER_ADDR': '172.30.0.50', 'SERVER_ADMIN': '[no address given]', 'SERVER_NAME': 'beta.stemcellcommons.org', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache/2.4.7 (Ubuntu)', 'mod_wsgi.application_group': 'beta.stemcellcommons.org|', 'mod_wsgi.callable_object': 'application', 'mod_wsgi.enable_sendfile': '0', 'mod_wsgi.handler_script': '', 'mod_wsgi.input_chunked': '0', 'mod_wsgi.listener_host': '', 'mod_wsgi.listener_port': '80', 'mod_wsgi.process_group': 'refinery', 'mod_wsgi.queue_start': '1522890651066341', 'mod_wsgi.request_handler': 'wsgi-script', 'mod_wsgi.script_reloading': '1', 'mod_wsgi.version': (3, 4), 'wsgi.errors': <mod_wsgi.Log object at 0x7f043fcd7d70>, 'wsgi.file_wrapper': <built-in method file_wrapper of mod_wsgi.Adapter object at 0x7f043dededc8>, 'wsgi.input': <mod_wsgi.Input object at 0x7f043fcd7e70>, ``` ### Expected behavior No errors ### Notes See also #2729 and #2931 Answers: username_0: Removed insert_facet_filter method in pull request #2950 Status: Issue closed
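As an editor's note on the failure mode: the traceback bottoms out in `list.index`, which raises `ValueError` whenever the requested facet is absent from the list. The project's actual fix removed `insert_facet_filter` in #2950; a guard of the following shape (a hypothetical helper, not Refinery code) merely illustrates how such a lookup can be made non-throwing:

```python
def safe_index(items, value):
    """Return items.index(value), or None when value is absent,
    instead of letting ValueError propagate to the request handler."""
    try:
        return items.index(value)
    except ValueError:
        return None
```

The caller then decides how to handle the `None` case (skip the filter, log, or return a 400) instead of producing an Internal Server Error.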
STEllAR-GROUP/hpx
89694886
Title: Revert #1535 Question: username_0: I think we should revert #1535, as it can be proven that the necessity of querying a second id_type to a promise is the result of incorrect code. I posted the detailed thought process as a comment of #1535. Answers: username_1: I wouldn't agree to this. I'd rather find a way to make sure the object the users calls get_id() for is still in valid state. username_2: Is this still an issue? I would also like to think that what @username_0 describes in #1535 should always work, even the parts tagged as 'UNSAFE' as the promise should really hold the component alive. If that really is an issue, is there a testcase demonstrating this? username_0: Testcase: ``` C++ // Copyright (c) 2015 <NAME> // // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #include <hpx/config.hpp> #include <hpx/hpx.hpp> #include <hpx/hpx_start.hpp> #include <hpx/include/iostreams.hpp> int hpx_main(int argc, char* argv[]) { { hpx::id_type promise_id; { hpx::promise<int> p; { auto local_promise_id = p.get_id(); hpx::cout << local_promise_id << hpx::endl; } hpx::this_thread::sleep_for(boost::chrono::milliseconds(100)); promise_id = p.get_id(); hpx::cout << promise_id << hpx::endl; } hpx::this_thread::sleep_for(boost::chrono::milliseconds(100)); // This segfaults, because the promise is not alive any more. // It SHOULD get kept alive by AGAS though. hpx::set_lco_value(promise_id, 10, false); } return hpx::finalize(); } int main(int argc, char* argv[]) { // initialize HPX, run hpx_main. hpx::start(argc, argv); // wait for hpx::finalize being called. return hpx::stop(); } ``` username_1: I still think we should rather make this test case work than to inhibit this kind of thing altogether. 
username_2: The problem here is in that line: https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/lcos/promise.hpp#L189 Upon the first retrieval of the GID, the credit count of the GID in the promise itself is set to zero. When the promise goes out of scope, AGAS thinks the credit is zero and issues the deletion. The promise GID needs to get split in this case in order to keep the promise itself alive.
username_0: That's not true; the gid in the promise becomes an unmanaged gid, which is important to prevent it from keeping itself alive, causing a recursive loop preventing any deallocation. If the promise goes out of scope, it will get kept alive by the other gid. The problem is if the other gid goes out of scope: then agas forgets about the promise, even if the user issues a second gid. (The base problem is: agas can't be "reminded" to re-learn about a gid once it is forgotten.) But as I already said, it can be proven that requesting a second gid is never necessary, so that functionality should be removed.
username_2: Why is it never necessary? Can you show the proof? What should happen instead when `promise<T>::get_id()` is being called? We can't invalidate the promise in that case.
username_0: 1. Exactly. 2. If you split the credit, there is no way to delete the promise, as it keeps itself alive. 3. The second call to get_id should throw. The basic idea of the proof is that, in order to prevent 1., the user would actively have to keep one promise alive, which he could simply re-use instead of using a new one.
username_2: Fixed by merging #1828
Status: Issue closed
taimos/HTTPUtils
909433027
Title: Update to org.apache.httpcomponents:httpclient:jar:4.5.13 Question: username_0: The version currently in use has open issues in OSSIndex:

[ERROR] org.apache.httpcomponents:httpclient:jar:4.5.9:compile; https://ossindex.sonatype.org/component/pkg:maven/org.apache.httpcomponents/[email protected]?utm_source=ossindex-client&utm_medium=integration&utm_content=1.1.1
[ERROR] * [CVE-2020-13956] Apache HttpClient versions prior to version 4.5.13 and 5.0.3 can misinterpret ma... (5.3); https://ossindex.sonatype.org/vulnerability/c0ed9602-d5c5-4c45-af48-c757161879ee?component-type=maven&component-name=org.apache.httpcomponents.httpclient&utm_source=ossindex-client&utm_medium=integration&utm_content=1.1.1

Should I make a pull request for it?
cu-mkp/edition-webpages
584561731
Title: add french how-to-use to original how-to-use Question: username_0: @username_1, please give me a line in French that I can add to the English version that says something like: "Want to read this in French? Click here!" but in a way that seems appropriate
Answers: username_1: Cliquez ici pour la version française ("Click here for the French version")
username_0: Merci!
username_0: done
Status: Issue closed
sudograph/sudodb
822587593
Title: API Design Question: username_0: I am designing the API in the simplest way possible for now...or just the simplest way that works. This might not mean it is best for those consuming it. I will probably tailor it towards mapping really well from GraphQL queries to Sudodb API calls...but putting some good effort into a nice API would probably help people who might want to use Sudodb alone. Then again...Sudograph is really what I want people to be using.<issue_closed> Status: Issue closed
web-dave/ng-essential-workshop
738539942
Title: Create a vehicle-preview Component Question: username_0:
* generate a `vehicle-preview` component
* declare an Input `vehicle`
* declare an Output `vehicleSelected`
* show Model and Make in the template
* use `vehicle-preview` in the `vehicle-list` template
* a click on a button should emit an event with the vehicle from `vehicle-preview` to `vehicle-list`
Answers: username_0: ### generate
```bash
ng g c fleet/vehicle-preview
```
username_0: ### vehicle-preview.component.html
```html
<ul *ngIf="vehicle">
  <li>{{vehicle.model}}</li>
  <li>{{vehicle.make}} <button (click)="selectThisVehicle()" class="btn btn-info">show me more</button></li>
</ul>
```
username_0: ### vehicle-list.component.html
```html
<div *ngIf="vehicles">
  <vehicle-preview *ngFor="let vehicle of vehicles" [vehicle]="vehicle" (vehicleselected)="selectVehicle($event)"></vehicle-preview>
</div>
```
username_0: ### vehicle-list.component.ts
```typescript
selectVehicle(vehicle) {
  console.log(vehicle);
}
```
username_0: [NEXT](https://github.com/username_0/ng-essential-workshop/issues/13)
SerenityOS/serenity
679854111
Title: LIBC_PROFILE_MALLOC=1 causes kernel to assert Question: username_0: 1. `export LIBC_PROFILE_MALLOC=1` (in a terminal in serenity) 2. Run any command that mallocs Expected: Command creates a json file called perfcore.XXX in the current directory. Actual: `[disasm(23:23)]: ASSERTION FAILED: Kernel::is_user_range(VirtualAddress(src_ptr), n)` in `[disasm(23:23)]: 0xc015dc7a Kernel::PerformanceEventBuffer::append(int, unsigned int, unsigned int) +230` That line was touched recently(ish) in16783bd14d5284542205a50c441562c19174f101 so maybe that's related (@username_1). Answers: username_0: 16783bd looks unrelated on second look. username_1: The problem is this: https://github.com/SerenityOS/serenity/blob/b0aa8115c2c1d240edc75bbd5263f7dc2ca3d28b/Kernel/PerformanceEventBuffer.cpp#L71 The second argument to `copy_from_user` (source pointer) is a kernel pointer, causing the assertion to fail. While I did touch this line in that commit, I didn't really change that part. Not sure how this could have ever worked the way it is right now. Not sure what the code is meant to do, but if it's supposed to copy the code where eip points to then it probably should look like this: ``` copy_from_user(&eip, (FlatPtr*)current_thread->get_register_dump_from_stack().eip); ``` Not sure if that's what is supposed to be done, but it solves the `ASSERT` being triggered. username_2: `copy_from_user` used to not assert. So it was just a kernel-to-kernel copy all along, with SMAP disabled lol. Status: Issue closed
bootstrap-vue/bootstrap-vue
590449418
Title: Planning Bootstrap 5 Version of Bootstrap Question: username_0: What are the plans? Is that on the roadmap? https://github.com/twbs/bootstrap/projects/11
Answers: username_1: Yep, it is on the roadmap... although it will drop IE11 support.
username_2: @username_1, bootstrap dropping IE11 support in v5 was definitely a shocker since many devs still have to support IE11 for the foreseeable future. Vue3 still plans to support IE11 via a compat build. What are your thoughts on the direction and future of bootstrap-vue as it relates to Vue3 and IE11? Do you plan on releasing a bootstrap-vue major version for Vue3+bootstrap4? Once bootstrap5 is stable, will bootstrap-vue drop support for IE11 entirely, or will it at least apply security/bug fixes to a bootstrap4 version for a set period of time? Sorry for all the questions, just curious about your general thoughts on the subject.
username_1: There will still be a BootstrapVue based on Bootstrap v4, which will be based on Vue 2.x (and tweaked for support on Vue 3, depending on how they deal with the changes to events being attributes). BootstrapVue for Bootstrap v5 will most likely be developed for Vue 3 only, and will have a different browserslistrc, which should make for smaller transpiled code bundles since the majority of modern browsers support many ES6 features. We will probably keep producing both Bootstrap v4 and v5 versions for a while. This is just a rough idea of our plan, which is still in the early stages. BootstrapVue for Bootstrap v4 will stay at BootstrapVue v2.x.x, while BootstrapVue for Bootstrap v5 will be released as BootstrapVue v3.x.x. And now that we have our own domain, we will be able to have multiple docs sites available for each major version.
username_3: Based on what data? We're happy to drop all IE (non-Chromium) support. There's always bootstrap-vue 2.x if you need IE11.
username_2: I don't see the point of your comment months after an answer to my question was provided, but here is some data for you.
https://twitter.com/youyuxi/status/1277605068947312641?s=19 username_0: If you need IE, there is Bootstrap 4. If we continue to tie upgrades to legacy tech, it will limit the features of what is next. Plus, unless a corporation has enterprise paid support contracts... IE is a security risk. It no longer gets updates for non-contract users. username_4: Closed in favor of #5507. Status: Issue closed
worldclub-sokuniv/xeory_base_child
638300886
Title: Styling for smartphone screens Question: username_0: ## 【Environment】
## 【Browser / Device】
Smartphone
## 【URL (which page)】
All pages
## 【Details / Conditions】
Apply styling to make the pages easier to read on smartphones.
## 【Proposed improvement】
First, create a list of the pages that need to support smartphone screens, then create an issue for each screen and fix it.
Answers: username_0: - [ ] Interview articles
Please add more pages here. Mention this issue from each issue you create.
username_0: @yuu772 For now, please just go as far as creating the list of pages that need to support smartphone screens.
rails/jbuilder
13376087
Title: Nested partials slow Question: username_0: The following code:
~~~
json.resources do
  asset.resource_sets.each do |set|
    resource = set.resource
    json.set! set.key do
      json.partial!("resources/base", :resource => resource)
      json.children do
        for child in resource.children
          json.set! (child.name || child.id) do
            json.partial!("resources/base", :resource => child)
          end
        end
      end
    end
  end
end
~~~
with
~~~
json.(resource, :url, :id, :name, :parent_id)
json.(resource, :created_at, :updated_at)
json.(resource, :uploading, :encoding, :encoding_progress, :encoding_job_count)
json.(resource, :width, :height, :content_length, :duration)
json.(resource.resource_type, :kind, :extension, :content_type)
json.type resource.resource_type.name
~~~
in resources/_base.jbuilder is nearly twice as slow as:
~~~
json.resources do
  asset.resource_sets.each do |set|
    resource = set.resource
    json.set! set.key do
      json.(resource, :url, :id, :name, :parent_id)
      json.(resource, :created_at, :updated_at)
      json.(resource, :uploading, :encoding, :encoding_progress, :encoding_job_count)
      json.(resource, :width, :height, :content_length, :duration)
      json.(resource.resource_type, :kind, :extension, :content_type)
      json.type resource.resource_type.name
      json.children do
        for child in resource.children
          json.set! (child.name || child.id) do
            json.(child, :url, :id, :name, :parent_id)
            json.(child, :created_at, :updated_at)
            json.(child, :uploading, :encoding, :encoding_progress, :encoding_job_count)
            json.(child, :width, :height, :content_length, :duration)
            json.(child.resource_type, :kind, :extension, :content_type)
            json.type child.resource_type.name
          end
        end
      end
    end
  end
end
~~~
Using partials: Completed 200 OK in 1818ms (Views: 1554.0ms | ActiveRecord: 156.7ms)
Without partials: Completed 200 OK in 1078ms (Views: 820.2ms | ActiveRecord: 152.1ms)
Is there anything I can do to improve performance?
Answers: username_1: username_0, sorry for raising this issue from the dead, but what configuration change was necessary to prevent the partial from being 're-interpreted' each time? username_0: The rails defaults for production environment should do the trick. I have not tested every single configuration.
vkamiansky/composite
295605353
Title: Create sequence projection functions toPartitioned, toPaged, toBatched Question: username_0: Create functions, write tests.
Answers: username_0: As part of resolving this issue the initial implementations of the abovementioned functions are to be put in place. Later, as part of resolving #6, the signatures of the functions may be changed.
username_0: Functions implemented, simple tests written. Comprehensive testing and args exception testing will be included in this task.
Status: Issue closed
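The repository itself is F#; purely as a language-agnostic illustration of what such sequence projections usually mean, here is a Python sketch. The names mirror the issue, but the exact semantics shown are an assumption (per the thread, #6 may still change the signatures):

```python
from itertools import islice

def to_batched(seq, size):
    """Lazily group a sequence into consecutive batches of `size`
    elements; the final batch may be shorter."""
    batch = []
    for item in seq:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def to_paged(seq, page, size):
    """Materialize one zero-based page of up to `size` elements."""
    return list(islice(seq, page * size, (page + 1) * size))
```

`to_batched` walks the input once and never holds more than one batch in memory, which is the usual reason such projections exist for large sequences.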
vercel/pkg
910675332
Title: DEBUG_PKG does not work when project is not on drive C: on Windows Question: username_0: I get this error after I follow the instructions [here](https://github.com/vercel/pkg#advanced) ``` D:\project\server_exe>server-win.exe ------------------------------- virtual file system C:\snapshot pkg/prelude/bootstrap.js:322 if (error) throw error; ^ Error: Directory 'C:\**\' was not included into executable at compilation stage. Please recompile adding it as asset or script. at error_ENOENT (pkg/prelude/bootstrap.js:539:17) at readdirFromSnapshot (pkg/prelude/bootstrap.js:1057:29) at Object.readdirSync (pkg/prelude/bootstrap.js:1083:19) at dumpLevel (pkg/prelude/bootstrap.js:2066:18) at installDiagnostic (pkg/prelude/bootstrap.js:2111:23) at pkg/prelude/bootstrap.js:2142:3 at pkg/prelude/bootstrap.js:2144:3 at readPrelude (internal/bootstrap/pkg.js:31:12) at internal/bootstrap/pkg.js:36:18 at internal/bootstrap/pkg.js:43:4 { errno: -4058, code: 'ENOENT', path: 'C:\\snapshot', pkg: true } ``` It works fine when I copy the same project to C: Answers: username_1: @username_0 can you provide a minimalist test that reproduces the behavior ? I try compiling a simple test script and move it to a different drive but I cannot reproduce it. 
username_0: [ { "nodeRange": "node12", "platform": "linux", "arch": "x64", "output": "D:\\testpkg\\test-linux", "forceBuild": false, "fabricator": { "nodeRange": "node12", "platform": "win", "arch": "x64", "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-win-x64" }, "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-linux-x64" }, { "nodeRange": "node12", "platform": "macos", "arch": "x64", "output": "D:\\testpkg\\test-macos", "forceBuild": false, "fabricator": { "nodeRange": "node12", "platform": "win", "arch": "x64", "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-win-x64" }, "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-macos-x64" }, { "nodeRange": "node12", "platform": "win", "arch": "x64", "output": "D:\\testpkg\\test-win.exe", "forceBuild": false, "fabricator": { "nodeRange": "node12", "platform": "win", "arch": "x64", "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-win-x64" }, "binaryPath": "C:\\Users\\florent\\.pkg-cache\\v3.1\\fetched-v12.22.1-win-x64" } ] D:\testpkg>test-win.exe hello D:\testpkg>set DEBUG_PKG=1 D:\testpkg>test-win.exe ------------------------------- virtual file system C:\snapshot pkg/prelude/bootstrap.js:322 if (error) throw error; ^ Error: Directory 'C:\**\' was not included into executable at compilation stage. Please recompile adding it as asset or script. 
at error_ENOENT (pkg/prelude/bootstrap.js:539:17) at readdirFromSnapshot (pkg/prelude/bootstrap.js:1057:29) at Object.readdirSync (pkg/prelude/bootstrap.js:1083:19) at dumpLevel (pkg/prelude/bootstrap.js:2066:18) at installDiagnostic (pkg/prelude/bootstrap.js:2111:23) at pkg/prelude/bootstrap.js:2142:3 at pkg/prelude/bootstrap.js:2144:3 at readPrelude (internal/bootstrap/pkg.js:31:12) at internal/bootstrap/pkg.js:36:18 at internal/bootstrap/pkg.js:43:4 { errno: -4058, code: 'ENOENT', path: 'C:\\snapshot', pkg: true } D:\testpkg> ``` username_1: fixed in 5.3.0 Status: Issue closed username_2: same problem in pkg version 5.3.1 Error: File or directory 'Z:\**\' was not included into executable at compilation stage. Please recompile adding it as asset or script. at error_ENOENT (pkg/prelude/bootstrap.js:557:19) at findNativeAddonForStat (pkg/prelude/bootstrap.js:1271:32) at statFromSnapshot (pkg/prelude/bootstrap.js:1294:25) at Object.statSync (pkg/prelude/bootstrap.js:1308:12) at C:\snapshot\test\local-space.service.js at Array.forEach (<anonymous>) at LocalSpaceService.getLocalFSInfo (C:\snapshot\test\local-space.service.js) at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async LocalSpaceController.fun (C:\snapshot\test\local-space.controller.js) { errno: -4058, code: 'ENOENT', path: 'Z:\\snapshot', pkg: true }
jbytecode/LinRegOutliers
721613366
Title: <NAME> (2005) Question: username_0: This paper suggests an outlier detection algorithm based on regression diagnostics that is relatively easy to implement.

``
<NAME> (2005) Identifying multiple influential observations in linear regression, Journal of Applied Statistics, 32:9, 929-946, DOI: 10.1080/02664760500163599
``

I can assign this to any of our friends who is interested.
Answers: username_0: I implemented this without any integration, documentation or tests. I changed the algorithm by replacing LMS with LTS, because LTS has nicer properties. The algorithm reports nice results with this change. After integration, I will close this issue. Informing all friends who contribute.
Status: Issue closed
NCEAS/recordr
114883281
Title: Add ability to 'prune' a run Question: username_0: A recordr run may include files and relationships that the user may wish to remove from the run. Add the ability to remove objects from the recorded information / file archive (if needed) and have recordr remove / repair the provenance graph to reflect these removals. This needs to be entered in the Run Manager API Design Document first and approved by all those interested. Answers: username_1: Duplicate of #70 Status: Issue closed
stan-dev/pystan
228399860
Title: Check code style of pull requests automatically Question: username_0: A minimal set of coding conventions (PEP8 + Google Style) should be checked automatically. Things like:
- 4 space indents
- 120 line max
- snake_case, capitalized class names
- alphabetic imports

I hope this is not difficult to set up. If anyone has some examples of this in the wild (esp if using travis), please speak up. Answers: username_1: We do this in math, stan, and cmdstan with cpplint which works great.
username_2: Hi @username_0, I would suggest using `flake8` (a tool combining `pep8` and `pyflakes`). I'm familiar with it thanks to the `yt` project (see https://github.com/yt-project/yt/blob/master/CONTRIBUTING.rst#automatically-checking-code-style). We can configure which PEP8 errors and warnings we want to detect vs ignore. I'm happy to work on a pull request, but I would need some guidance to overcome this: https://github.com/stan-dev/pystan/blob/ea186baac916dc0f9fbf56ffa92a51f521831587/.travis.yml#L3 I would be a lot more comfortable if it said `language: python`! Best, Marianne PS: Looping in @jgabry.
username_0: Great point. flake8 is wonderful. We should figure out a way to run it. I wonder if we can run it in such a way that it only tests new code. I'm sure the existing codebase is not entirely compliant with all of flake8's checks. As for flake8 options, I tend to use: ``--ignore=H238,H304 --max-line-length=120 --max-complexity=12`` (H238,H304 do not apply to Python 3, IIRC)
username_2: Indeed, there are many flake8 violations in the current codebase:
```
marianne@...:~/python/pystan(develop)$ python3.6 -m flake8 --max-line-length=120 --max-complexity=12 | wc -l
2164
```
So we probably want to check only new code, which calls for [flake8 --diff](http://flake8.readthedocs.io/en/latest/user/options.html#cmdoption-flake8-diff). We could do something along the lines of https://github.com/scikit-learn/scikit-learn/pull/7127; what do you think?
Additionally, we should tell contributors (in the contribution guidelines) about `flake8 --diff` and/or the Git [pre-commit](https://consideratecode.com/2016/10/15/check-code-changes-with-flake8-before-committing/) hook. username_0: @username_2 Sounds like a plan. If you, or anyone else, is interested in testing things out it might be easiest to start with stan-dev/pystan-next because the tests will finish about 10x faster. username_2: Great! I have forked stan-dev/pystan-next and opened an issue, as you can see. @username_0 @bob-carpenter @jgabry By the way, I would like to suggest that we use `master` as the base branch in stan-dev/pystan-next (as opposed to `develop` in stan-dev/pystan), so that we mesh better with the SciPy ecosystem. username_2: @username_0 Thank you for drawing my attention to stan-dev/httpstan. I wonder where it fits exactly in this picture https://github.com/stan-dev/stan/wiki/Where-do-I-create-a-new-issue#overview (I guess, close to `cmdstan`)... @jgabry Indeed, it looks like httpstan's CI supports style checking with flake8, but I thought you wanted to focus on style checking for new/submitted code *only*; so we still have to go through the 'trouble' of calling `flake8 --diff` and more complex machinery... Right? username_0: There's some background that I should mention. Both RStan and PyStan are inching towards new user-facing APIs (dubbed RStan 3 and PyStan 3). The version of PyStan living inside of pystan-next (and its dependency, httpstan) is the prototype of PyStan 3. I've enabled (or will enable shortly) mypy and flake8 checking on this new thing (both pystan-next and httpstan). My initial plan was to forget about flake8 on the old pystan and just focus on PyStan 3. Adding incremental flake8 checking to PyStan 2 would be valuable of course. I suppose there are separate issues here. PyStan 2 (stan-dev/pystan) needs incremental checking if it needs any checking at all. 
PyStan 3 (stan-dev/pystan-next) needs to be checked to see if it's doing the mypy and flake8 checks correctly (once I add .travis.yml). Status: Issue closed
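The incremental approach discussed in this thread (running flake8 only on the lines a pull request touches) boils down to mapping a unified diff to the changed line numbers per file, then filtering style reports to those lines. Below is a minimal sketch of that mechanism, not flake8's actual implementation; the function name is an assumption:

```python
import re

def changed_lines(diff_text):
    """Map each file in a unified diff to the set of added/changed line numbers.

    This mirrors the idea behind `flake8 --diff`: only report style errors
    that fall on lines the diff actually touches. (Sketch only; flake8's
    real implementation differs.)
    """
    changes = {}
    current = None
    lineno = 0
    for line in diff_text.splitlines():
        if line.startswith('+++ '):
            # "+++ b/path/to/file.py" -> "path/to/file.py"
            current = line[4:].split('\t')[0]
            if current.startswith('b/'):
                current = current[2:]
            changes.setdefault(current, set())
        elif line.startswith('@@'):
            # Hunk header "@@ -a,b +c,d @@": new-file content starts at line c.
            lineno = int(re.search(r'\+(\d+)', line).group(1))
        elif line.startswith('+'):
            # An added line: record it, then advance the new-file counter.
            changes[current].add(lineno)
            lineno += 1
        elif not line.startswith('-'):
            # Context line: advance the counter without recording.
            lineno += 1
    return changes
```

Filtering a full flake8 report against the result then gives per-PR incremental checking without having to clean up the legacy codebase first.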
FPtje/DarkRP
757506106
Title: Stunstick does too much damage to entities Question: username_0: https://github.com/username_1/DarkRP/blob/bb531405cf82120de7cedc54b5cda5cc6a7a550f/entities/weapons/stunstick/shared.lua#L163

The stunstick should not assume all entities are illegal and immediately 1-tap them. If you've created an entity that players need to gradually destroy, one guy could just come in with a stunstick and destroy it instantly, regardless of whether it's illegal or not. Status: Issue closed Answers: username_1: One-tapping those entities was the point of the stunstick. I see no way to change it without pissing a lot of people off. username_2: Add a configuration variable to the stunstick instead of hardcoding the damage. It should also scale with the entity's health, since it will not instantly kill an entity with more than 1000 health. username_1: That I can do, I suppose. username_0: Config options?

0 = Normal damage
1 = Maximum damage
2 = Maximum damage if ent.SeizeReward username_1: Just a number for the damage will do, I suppose. username_2: A negative number should indicate the entity's total health, and perhaps function values should be allowed. Ex.
```
SWEP.Damage = function(ent) return ent:Health() / 2 end
```
username_3: If SeizeReward exists, do high damage; if not, do normal damage. Easy solution. Status: Issue closed
Antriel/phaser-ts2hx
229708079
Title: phaser.PhaserTextStyle should be a typedef instead of interface Question: username_0: Not sure how you are reading this in, but phaser.PhaserTextStyle should be a typedef instead of an interface. There may be other cases like this. Answers: username_1: I didn't yet see how it behaves in usage with Haxe, but I was expecting this change. I will keep this in mind when I get back to work on this. username_2: Any news on this? username_2: Great :+1:
metaps/genova
709973086
Title: The AutoDeploy settings accept an array, but multiple entries don't work Question: username_0: https://github.com/metaps/genova/wiki/GitHub-push-detect-deploy
The YAML can be configured as an array, but even when multiple entries are specified, only the topmost one is deployed.
Answers: username_1: I can't reproduce this. Please share a sample of the YAML.
username_0:
```
auto_deploy:
  - branch: release
    cluster: staging-app
    service: backend
  - branch: release
    cluster: staging-app
    service: worker
```
With this YAML, only backend was deployed. If I swap backend and worker, only worker was deployed automatically.
username_1: I have added a `services` parameter. `service` will be deprecated going forward.
https://github.com/metaps/genova/wiki/GitHub-push-detect-deploy
```
auto_deploy:
  - branch: release
    cluster: staging-app
    service:
      - backend
      - worker
```
username_1: https://github.com/metaps/genova/releases/tag/v3.0.4
Status: Issue closed
6pac/SlickGrid
446503013
Title: Cancel opening detail panel on click Question: username_0: Following this example: http://6pac.github.io/SlickGrid/examples/example16-row-detail.html

Is there a way to cancel the open action after the row is clicked? Or can I toggle whether the plugin is active on the fly?

I want to have at most one panel open, and force the user to click [Save] or [Cancel] in the detail panel before a new one can be opened. Answers: username_1: Again, all the documentation is directly inside the [Row Detail Plugin](https://github.com/6pac/SlickGrid/blob/master/plugins/slick.rowdetailview.js). I assume you can do what you want with the few events available and then stop the event from bubbling up. username_0: As far as I can tell this isn't possible, since returning false inside these events only stops the rest of the code below from being run; panels are still opening no matter what I do inside these event listeners, hence this question :) username_2: `expandedClass`: I think this is the option you're looking for, though this may be one div deeper than you need. Alternatively, after the detailView is opened, you could get the `_parent` from the data returned in the function, which would allow you to get the row you need to put the class on. Hope this helps. username_2: You could use the expandableOverride to stop others opening, e.g. you keep a bool saying whether one is open and update the override to hide all the others while it's open. Not the best way to do it, but this is just one way I can think of doing it. As for stopping the collapse, removing the `detailView-toggle` from the item while it's being edited should stop it being able to be collapsed. (Not sure what other issues this would cause, though.) Hope this at least gives you somewhere to start looking.
GrahamCampbell/Laravel-GitLab
433247761
Title: Retrieving Projects Question: username_0: Hi,

I apologise if this is a dumb question! I am using this code

```
<?php

namespace App\Http\Controllers\API\Integrations\GitLab;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller;
use username_2\GitLab\Facades\GitLab;
use username_2\GitLab\GitLabManager;

class GitLabsController extends Controller
{
    function __construct()
    {
    }

    public function index(){
        $projects = GitLab::Projects();
        return response()->json($projects);
    }
}
```

in my controller (I have put my access token in the config settings), but it's returning an empty array. Can you advise what I am doing wrong? Answers: username_1: You have to complete the request; it should be:
```php
$projects = GitLab::Projects()->all();
```
Also, you could add a where() and then get(). See: https://laravel.com/docs/5.8/eloquent#retrieving-models username_2: GitLab is not an eloquent model. username_2: The GitLab facade operates on an instance of `Gitlab\Client`. Please take a look at the code for that to see what is callable from there. Status: Issue closed
PoojaMittal2842/Delhi-Tourism
849855047
Title: Link all CSS, JS, and images according to the backend structure Question: username_0: Due to the backend structure, the UI of the website is not visible. This is generally due to improper linking of CSS, JS, and images in the code. Make changes to the website! Answers: username_1: I would like to work on this issue. @username_0 Please assign it to me. username_2: Sir, the images, CSS, and JS of all the pages are working fine. Just the pages of the Trash folder have not been updated. All other pages are showing the images and the respective CSS styles. The error appears if the npm packages have not been installed. For that, just type 'npm install' in the terminal when you are in the parent directory. username_2: Only the Trash folder pages are not updated, as they don't need to be used in the future. That's why those pages are in the trash. username_3: @username_0 Can I work on this issue?
FIX94/Nintendont
307816276
Title: GC gamepad for Wii U (Wired Fight Pad) always acts as controller 2 (player two) - Classic Wii Question: username_0: Hello! When I connect the controller to a Wii Remote and run any GC game in Nintendont, the controller never functions as "player one". If a physical GC controller is connected, the GC gamepad for Wii U changes the LED order. Even if you do not connect another physical controller to port 1, it is recognized as controller 2. Thank you Answers: username_1: That is how it works; there is a priority system: real GC ports on a Wii with BC > Wii U GamePad > HID controllers > wireless controllers. username_0: All right! I understand, but then isn't it possible to play as "player one" in "wireless mode" using the GC gamepad for Wii U? I wish to play Zelda Wind Waker in this mode :) username_1: To use a classic controller as player one, simply say no to using the Wii U GamePad (if using an inject) and don't connect any HID controllers to the USB ports; that way wireless controllers will be player one. Status: Issue closed username_0: Ohhhh!!!! Thank you very much!!!!!
gbif/ipt
96250761
Title: Allow IPT to be installed without an organization Question: username_0: The IPT should not need an organization to be installed. Several people have come forward complaining about this, and have been using it to generate archives for exchange within networks but without any intention or need to publish to GBIF. Most recently the iDigBio folks. The option to use the test mode is not ideal, as it both puts "TEST MODE" on the site which is a nuisance and will not use production level standards and vocabularies. Answers: username_1: VertNet uses the IPT this way as well for our own internal purposes when we need to break resources apart for just Verts (like Smithsonian) or to publish post-migration on resources that came from self-hosted institutions or from non-IPT generated DwCA (like OZCAM resources). We have not upgraded yet, but we currently have 29 resources that we harvest for VertNet this way. username_2: Hi All, Tim, this also poses a problem for us at iDigBio when we want to do a workshop and give people Test Mode instances. Do we also need to "register" 30 instances of test mode versions, for a workshop? username_3: +1 username_1: In answer to Deb's question, the 2.2.1 in Test Mode also requires you to register the IPT. I was working with DataOne and Colorado State last week. They called because they wanted to evaluate IPT and were having difficulty because they couldn't do anything without registering. I ended up helping them to install 2.1.1 in test mode so they could evaluate the parts and I explained what features/differences are in 2.2.1. username_4: This new feature in 2.2.1 is impeding our progress. Could it be assigned to someone to work on soon? username_0: Thanks everyone for the feedback. @username_6 - please can you make sure the IPT does not require an organization to install in either test or production mode for the imminent 2.3 release. To install an IPT should not require any organization - in production or test mode. 
username_0: Looking through comms New Zealand (Land care) also have asked for this to be removed username_5: Once implemented, we'll have to test this for the new [DCAT feed feature](https://github.com/gbif/ipt/pull/1185). I think the code will still work fine, even without a registered organization, but the feed won't be valid anymore. Note that this is also the case if no datasets have been published yet, so this is not without precedent. @simon-vc @amenai username_6: This issue has been fixed. Installing IPT 2.3 will no longer require any organisation - in production or test mode. Please note that in the meantime, IPT admins can work around this requirement by registering their test IPTs against "Test Organisation # 1" with password=<PASSWORD>. The reason for requiring the IPT to be installed with an organisation, was that the publishing organisation became a mandatory metadata field in IPT 2.2. In addition to being required for registration with GBIF, the publishing organisation is mandatory for minting DOIs with DataCite/EZID, and enables auto-generating a resource citation. Anyways, this requirement will be dropped in IPT 2.3 by simply making it possible to publish a resource without any publishing organisation. Of course this means users can't register their dataset with GBIF, and a warning explaining this consequence will be shown to users. I should also point out, that an IPT in test mode gets registered in the GBIF Sandbox Registry (not the live GBIF Registry), allowing its registered resources to be indexed into the GBIF Sandbox/UAT Portal available online at www.gbif-uat.org. This allows trainees at workshops to experience the complete publishing lifecycle. No doubt there is a huge amount of work required to setup up a room of IPTs for a workshop, so hopefully more progress can be made on automating IPT deployments in the future. @username_5 thanks for the reminder, we will test this new functionality thoroughly after merging this pull request in. 
username_2: Looking forward to trying it out. Thanks @username_6 Status: Issue closed username_6: Great to hear @username_2 - I look forward to your feedback on this issue. Now that all work (including translations) has been completed, I'm closing the issue. username_2: @username_6 I'm going to be testing the newer RC release where the missing drop-down menus issue has been fixed.
ucbtrans/opt
725879525
Title: Flow dropping to 0 Question: username_0: Flow drops to 0 despite there being demand.

![image](https://user-images.githubusercontent.com/60490284/96634359-3c0dce00-12cf-11eb-977b-b4851db71478.png) Answers: username_1: @username_0 Please see my comment on issue 202 username_0: Even if I make it 1.8 or even higher, this does not fix the problem @username_1 username_1: new OTM
kyoussef77/StatisticsCalculatorTeam
578803261
Title: Add Division to Calculator Question: username_0: - [ ] Create a Division function (division.py) in the MathOperations folder
- [ ] Import the function into MathOperations.py in the same folder
- [ ] Import MathOperations into Calculator/calculator.py and create a function for division that stores the result
- [ ] Create a Unit Test for division in Tests/test_MathOperations.py with the data below:

Input: [15, 5], result: [3]
Status: Issue closed
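As a rough sketch of what the checklist above describes (the exact names and signatures here are assumptions, not the team's actual code), `division.py` plus the calculator-side wrapper could look like this:

```python
# Sketch of MathOperations/division.py plus the calculator-side wrapper
# described in the checklist (names and signatures are assumptions).

def division(a, b):
    """Divide a by b, raising a clear error instead of failing silently."""
    if b == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return a / b


class Calculator:
    """Holds the result of the last operation, as the checklist requires."""

    def __init__(self):
        self.result = 0

    def divide(self, a, b):
        # Store the result so later operations and tests can read it back.
        self.result = division(a, b)
        return self.result
```

With the checklist's test data, `Calculator().divide(15, 5)` returns and stores `3`.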
hyb1996-guest/AutoJsIssueReport
273163237
Title: [163]java.lang.StringIndexOutOfBoundsException: length=289; regionStart=1; regionLength=-1 Question: username_0: Description: --- java.lang.StringIndexOutOfBoundsException: length=289; regionStart=1; regionLength=-1 at java.lang.String.substring(String.java:1931) at com.stardust.autojs.script.JavaScriptSource.parseExecutionMode(JavaScriptSource.java:69) at com.stardust.autojs.script.JavaScriptSource.getExecutionMode(JavaScriptSource.java:58) at com.stardust.autojs.ScriptEngineService.execute(ScriptEngineService.java:127) at com.stardust.autojs.ScriptEngineService.execute(ScriptEngineService.java:142) at com.stardust.scriptdroid.script.Scripts.runWithBroadcastSender(Scripts.java:96) at com.stardust.scriptdroid.ui.edit.EditActivity.run(EditActivity.java:215) at com.stardust.scriptdroid.ui.edit.EditActivity.access$100(EditActivity.java:56) at com.stardust.scriptdroid.ui.edit.EditActivity$2.onSaved(EditActivity.java:196) at com.jecelyin.editor.v2.task.SaveTask$1.onSuccess(SaveTask.java:112) at com.jecelyin.editor.v2.io.FileWriter.onPostExecute(FileWriter.java:126) at com.jecelyin.editor.v2.io.FileWriter.onPostExecute(FileWriter.java:37) at android.os.AsyncTask.finish(AsyncTask.java:667) at android.os.AsyncTask.-wrap1(AsyncTask.java) at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:684) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:163) at android.app.ActivityThread.main(ActivityThread.java:6356) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:901) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:791) Device info: --- <table> <tr><td>App version</td><td>2.0.16 Beta2.1</td></tr> <tr><td>App version code</td><td>163</td></tr> <tr><td>Android build version</td><td>7.11.9</td></tr> <tr><td>Android release version</td><td>7.1.1</td></tr> <tr><td>Android SDK version</td><td>25</td></tr> <tr><td>Android build 
ID</td><td>NMF26F</td></tr> <tr><td>Device brand</td><td>Xiaomi</td></tr> <tr><td>Device manufacturer</td><td>Xiaomi</td></tr> <tr><td>Device name</td><td>oxygen</td></tr> <tr><td>Device model</td><td>MI MAX 2</td></tr> <tr><td>Device product name</td><td>oxygen</td></tr> <tr><td>Device hardware name</td><td>qcom</td></tr> <tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr> </table>
softwaremill/sttp-model
566029680
Title: Sttp model does not describe how to match different responses to different models Question: username_0: Usually in REST applications one endpoint provides multiple response models: generally at least one model for a successful response and one for errors. It would be very useful to be able to describe every expected response and map it to a client model. One use case is autogenerated clients. Here is an example of how a model describes responses in OpenAPI: https://github.com/OpenAPITools/openapi-generator/blob/master/samples/client/petstore/scala-akka/src/main/scala/org/openapitools/client/api/PetApi.scala#L38 Answers: username_1: @username_0 This is currently done in sttp-client: https://sttp.readthedocs.io/en/latest/responses/body.html. In sttp-client, the request contains a specification of what to do with the response; this usually is an `Either[X, Y]`, where the left describes the error case, and the right the success case.

For generating servers and documentation, this is also covered by sttp-tapir: https://tapir-scala.readthedocs.io/en/latest/endpoint/basics.html. There, the endpoint description contains information on how to map responses to errors and success values.

Would you somehow see a new abstraction in sttp-model? What kind of use-cases would it cover? username_0: @username_1 My concern is that the request model should be more self-describing. sttp-client covers only the success/failure case, but, for example, I can specify an endpoint with the following expected mappings:
```
POST /create/something
200 -> SomethingModel
400 -> TechnicalProblemsModel, e.g. field "name" is missing
409 -> ValidationErrorsModel, e.g. something with "name" already exists
422 -> ValidationErrorsModel
```
Of course, I can parse the response into the necessary models with an additional callback on Left. But wouldn't it be better if I could describe this with a model? username_1: I think this is doable right now with sttp-client.
For example, you can create the following response specification: ``` sealed trait MyModel case class SomethingModel() extends MyModel case class TechnicalProblemsModel() extends MyModel case class ValidationErrorsModel() extends MyModel case class ErrorModel() extends MyModel val asMyJson: ResponseAs[Either[ResponseError[circe.Error], MyModel], Nothing] = fromMetadata { meta => meta.code match { case StatusCode.Ok => asJson[SomethingModel] case StatusCode.BadRequest => asJson[TechnicalProblemsModel] case StatusCode.Conflict => asJson[ValidationErrorsModel] case _ => asJson[ErrorModel] } } ``` is this something as you had in mind? username_0: @username_1 Yes, exact that username_1: @username_0 great, so - is what is currently available in sttp-client enough for your needs, or do you have a use-case that's not covered? Maybe you have a good idea on how some of this could be moved to sttp-model :) username_0: @username_1 I tried next example ``` libraryDependencies ++= Seq( "com.softwaremill.sttp.client" %% "core" % "2.0.0-RC10", "com.softwaremill.sttp.client" %% "circe" % "2.0.0-RC10", "io.circe" %% "circe-generic" % "0.12.3" ) ``` ``` import sttp.client._ import sttp.model.StatusCode import sttp.client.circe._ import io.circe.generic.auto._ sealed trait MyModel case class ItemModel(id: Int, node_id: String, name: String, full_name: String) extends MyModel case class ResultModel(total_count: Int, incomplete_results: Boolean, items: List[ItemModel]) extends MyModel case class ErrorModel(message: String) extends MyModel object MyApp extends App { // Model Defintion def getRepositories(query: String, sort: Option[String] = None) : RequestT[Identity,Either[ResponseError[io.circe.Error], MyModel], Nothing] = { basicRequest .get(uri"https://api.github.com/search/repositories?q=$query&sort=$sort") .response { fromMetadata { meta => meta.code match { case StatusCode.Ok => asJson[ResultModel] case StatusCode.BadRequest => asJson[ErrorModel] } } } } implicit val backend = 
HttpURLConnectionBackend()

  val response = getRepositories(query = "http language:scala").send()

  response.body match {
    case Left(ex) => throw ex
    case Right(model) => println(model)
  }
}
```
It won't compile: it expects `MyModel` rather than `ResultModel`/`ErrorModel`. Maybe I did something wrong here. But what I see is that the `asJson` method (and similar methods) could be extracted into the model via some base trait `SttpSerializerApi`. Right now I have to specify the serializer implementation first, but I actually want to define the model free from the actual client and serializer implementation, and define those separately. The example above could be transformed into something like this:
```
class RepositoryApi(implicit serializer: sttp.model.SerializerApi) {

  def getRepositories(query: String, sort: Option[String] = None)
  : RequestT[Identity,Either[ResponseError[io.circe.Error], MyModel], Nothing] = {
    basicRequest
      .get(uri"https://api.github.com/search/repositories?q=$query&sort=$sort")
      .response {
        fromMetadata { meta =>
          meta.code match {
            case StatusCode.Ok => serializer.asJson[ResultModel]
            case StatusCode.BadRequest => serializer.asJson[ErrorModel]
          }
        }
      }
  }
```
username_1: @username_0 Ah yes, that's because `ResponseAs` was not covariant (see https://github.com/softwaremill/sttp/issues/408). This is fixed in sttp-client 2.0.0-RC11. Maybe you can try that? username_1: And yes, you can abstract over the serialization specifics. In the end, you need to provide a `ResponseAs[T, S]` value, which describes what to do with the response. How this value is created is arbitrary. username_0: @username_1 Thanks, the code is fixed with `RC11`. But I still can't come up with an abstraction. Could you help by providing an example? If it's possible, then we probably don't need to extend the model, and client functionality could be enough.
username_1: this compiles just fine: ``` import sttp.client._ import sttp.model.StatusCode object Test extends App { sealed trait MyModel case class SuccessModel() extends MyModel case class ErrorModel() extends MyModel trait SerializerApi { def asJson[T]: ResponseAs[Either[ResponseError[io.circe.Error], T], Nothing] } class RepositoryApi(serializer: SerializerApi) { def getRepositories( query: String, sort: Option[String] = None ): RequestT[Identity, Either[ResponseError[io.circe.Error], MyModel], Nothing] = { basicRequest .get(uri"https://api.github.com/search/repositories?q=$query&sort=$sort") .response { fromMetadata { meta => meta.code match { case StatusCode.Ok => serializer.asJson[SuccessModel] case StatusCode.BadRequest => serializer.asJson[ErrorModel] } } } } } } ``` direct modification of your example :) username_0: @username_1 I stuck with same solution :) It want `Decoder[T]` to be passed when i trying to use implementation, but trait don't have it ``` trait SerializerApi { def asJson[T]: ResponseAs[Either[ResponseError[io.circe.Error], T], Nothing] } ``` I tried next approach: ``` import io.circe.generic.AutoDerivation import sttp.client._ import sttp.client.circe.SttpCirceApi import sttp.model.StatusCode object Test extends App { sealed trait MyModel case class SuccessModel() extends MyModel case class ErrorModel() extends MyModel trait SerializerApi[DECODER[_]] { def asJson[T: DECODER : IsOption]: ResponseAs[Either[ResponseError[io.circe.Error], T], Nothing] } class RepositoryApi[DECODER](serializer: SerializerApi[DECODER]) { import serializer._ // import autodecoders def getRepositories( query: String, sort: Option[String] = None ): RequestT[Identity, Either[ResponseError[io.circe.Error], MyModel], Nothing] = { basicRequest .get(uri"https://api.github.com/search/repositories?q=$query&sort=$sort") .response { fromMetadata { meta => meta.code match { case StatusCode.Ok => serializer.asJson[SuccessModel] case StatusCode.BadRequest => 
serializer.asJson[ErrorModel]
          }
        }
      }
    }
  }

  class CirceSerializer extends SerializerApi[io.circe.Decoder] with SttpCirceApi with AutoDerivation

  val serializer = new CirceSerializer
  val api = new RepositoryApi(serializer)

  implicit val backend = HttpURLConnectionBackend()
  val response = api.getRepositories(query = "http language:scala").send()
}
```
but it won't compile either, because the implicit decoders are not provided by the trait.

It works fine if I change it to the specific implementation
```
class RepositoryApi(serializer: CirceSerializer) {
```
It seems the issue is related to the client, not the model. Should I open a new issue on the client for that?

---

Back to the topic: I see tapir uses `jsonBody` to specify responses for different status codes (https://tapir-scala.readthedocs.io/en/latest/endpoint/statuscodes.html#dynamic-status-codes). I think it makes sense to have `jsonBody` in `sttp-model` as a common abstraction, free of any encoder/decoder implementation. So the example above, even though it relates to the client, still makes sense here. username_1: Ah, you want to abstract JSON generation... The problem is that JSON codec derivation with circe is a compile-time mechanism, so in order to perform the derivation, you need to know (at compile-time) the concrete class for which to derive the codec. So at the point where you call `serializer.asJson[Something]`, you need to know what typeclass the compiler should derive.

The tapir codecs might end up in a separate project (maybe here, as you write), however they are more powerful than what's needed in the client: they additionally capture validation and schema, and are bi-directional. On the other hand, for the client it's enough to decode (for responses) and encode (for requests). username_0: Given the generic response type `RequestT[Identity, Either[ResponseError[Exception], T], Nothing]`, what do you suggest to improve the following code? I think generally it is good to return Success(model) for a successful (2xx) result and an Error/Exception for any other case.
And for expected error cases like "not found" I can have a general ApiError, which is useful to separate business-logic errors from transport/serialization errors. At the same time, I want the api method to be as self-describing as possible before I make the actual `send()` call. So right now I can return ApiError[ApiResponse] as an exception by explicitly throwing it:
```
case class ApiResponse(
    code: Option[Int] = None,
    message: Option[String] = None) extends ApiModel

case class ApiError[T](val model: T) extends Exception

basicRequest
  .method(Method.GET, uri"$baseUrl/pet/${petId}")
  .response(
    asJson[Pet].mapWithMetadata {
      case (Right(value), _) => Right(value)
      case (Left(ex: HttpError), responseMetadata) =>
        responseMetadata.code match {
          case StatusCode.NotFound =>
            throw new ApiError(serialization.read[ApiResponse](ex.body))
          case _ => Left(ex)
        }
    })
```
but it seems not to be correct in general, as my result type is `RequestT[Identity, Either[ResponseError[Exception], T], Nothing]`. Would it be better to have an ApiError as part of the ResponseError?
```
sealed abstract class ResponseError[+T] extends Exception {
  def body: String
}
case class HttpError(body: String) extends ResponseError[Nothing]
case class DeserializationError[T](body: String, error: T) extends ResponseError[T]
case class ApiError[T](body: String, error: T) extends ResponseError[T]
```
Or are my expectations not really correct, and should I operate with the more generic `RequestT[Identity, Either[Exception, T], Nothing]` type?

ps: it could also be useful if HttpError carried the status, like: `case class HttpError(code: StatusCode, body: String) extends ResponseError[Nothing]`
username_1: I think it depends on what your target model is. You can either have typed errors - where at least some of the errors are represented as a case class - or untyped errors, just using `Exception`. If the latter, then the generic response type should be `RequestT[Identity, Either[ResponseError[Exception], T], Nothing]` as you write.

If you do have typed errors, then you would probably need a custom hierarchy, covering three cases:

1. non-2xx response, successfully parsed http error
2. non-2xx response, unknown http error
3. unparseable 2xx body

I'd represent transport errors as exceptions (or failed effects), as these are distinct from the errors described above: in that case, there's no response received at all. There's no need for `HttpError` to contain the status code, as that's not part of the body, but part of the response meta-data. This is always available on the `Response` type, regardless of how the body is handled.
username_1: This should be fixed in sttp 3.0, by supporting typed errors in `asJsonEither` etc. Please reopen if this would still be problematic.
Status: Issue closed
bwa-mem2/bwa-mem2
461039174
Title: Wheat 17G genome build index Segmentation fault
Question:
username_0: Hi,
I want to use bwa-mem2 on the wheat genome (17G). bwa-mem2 was compiled from source, but the program had a segmentation fault when building the index.
```sh
bwa-mem2 index -p CS_parts_bwamem2 161010_Chinese_Spring_v1.0_pseudomolecules_parts.fasta
[bwa_index] Pack FASTA... 85.80 sec
init ticks = 763897684114
ref seq len = 29094523130
binary seq ticks = 497034160160
build index ticks = 21694227971367
ref_seq_len = 29094523130
count = 0, 7837288578, 14547261565, 21257234552, 29094523130
BWT[12766107114] = 4
CP_SHIFT = 5, CP_MASK = 31
sizeof CP_OCC = 64
Segmentation fault
```
My system info: Linux debian 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux; **512G memory**

By the way, bwa-mem works well.

The genome file: ftp://ftp.ensemblgenomes.org/pub/plants/release-43/fasta/triticum_aestivum/dna/
Answers:
username_1: We will look into this and get back to you.
username_0: Hi, is there any progress on this issue?
username_1: We are working on it. We have reproduced the bug. Trying to fix it now.
username_1: Most likely, you are running out of memory. Creating the index using the current code needs a maximum memory of 34N bytes, where N is the size of the reference sequence. With the wheat genome, it would be 34*17 = 578 GB. We are trying to reduce the maximum memory required.
username_2: Hi, I'm getting the same error when indexing hg19.fa with 32GB RAM:
```sh
bwa-mem2 index hg19.fa
[bwa_index] Pack FASTA... 7.61 sec
init ticks = 257657731164
ref seq len = 6274322528
binary seq ticks = 133066087488
Segmentation fault (core dumped)
```
username_1: Hi, that is because indexing the human genome requires nearly 40 GB of memory.
username_1: @username_0 Fixed the code to work with the wheat genome. Now it needs 28N space = 28*17 = 476 GB to build the wheat genome index. Also fixed the rest of the code so that the wheat genome (and other larger genomes) can be mapped correctly.
Status: Issue closed
username_0: Coooool! Thank you so much.
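The peak-memory figures quoted in this thread — 34N bytes before the fix and 28N after, for a reference of N bases — can be checked with a few lines of Python; the 17 GB wheat reference size is the round figure used above:

```python
def index_peak_memory_gb(ref_size_gb, bytes_per_base):
    """Rough peak RAM needed to build a bwa-mem2 index, using the
    per-base factors quoted in this thread (34 before the fix, 28 after)."""
    return ref_size_gb * bytes_per_base

wheat_gb = 17  # approximate wheat reference size used in the thread

print(index_peak_memory_gb(wheat_gb, 34))  # 578 (GB, old code)
print(index_peak_memory_gb(wheat_gb, 28))  # 476 (GB, fixed code)
```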
stripe/stripe-node
747494985
Title: Property 'checkout' does not exist on type 'typeof Stripe'. Did you mean 'Checkout'?
Question:
username_0: Version: 8.122
Bug: Type definition

Following the stripe documentation on node js, `stripe.checkout` does not exist:
```
try {
  const session: stripe.Checkout.Session = await stripe.checkout.sessions.create({
    mode: "subscription",
    payment_method_types: ["card"],
    line_items: [
      {
        price: priceId,
        quantity: 1,
      },
    ],
    // {CHECKOUT_SESSION_ID} is a string literal; do not change it!
    // the actual Session ID is returned in the query parameter when your customer
    // is redirected to the success page.
    success_url: 'https://example.com/success.html?session_id={CHECKOUT_SESSION_ID}',
    cancel_url: 'https://example.com/canceled.html',
  });
```
Status: Issue closed
Answers:
username_0: I didn't instantiate the stripe object with the credentials.
tensorflow/tensorflow
343331610
Title: Waste lots of time to redownload grpc when building with CMake
Question:
username_0: ### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: No.
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Windows 7 64 bit.
- **Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device**: No.
- **TensorFlow installed from (source or binary)**: source
- **TensorFlow version (use command below)**: 1.9
- **Python version**: 3.5
- **Bazel version (if compiling from source)**: No.
- **GCC/Compiler version (if compiling from source)**: VS2015
- **CUDA/cuDNN version**: CUDA 8.0 and cuDNN 7.0.
- **GPU model and memory**: No.
- **Exact command to reproduce**:
```
cd $TENDORFLOW_DIR/tensorflow/contrib/cmake
cmake ..
make -j20
```

### Describe the problem
I want to build tensorflow from source with CMake on Windows, but it fails with an error caused by the failed download of grpc. I wonder if there is a way to avoid redownloading grpc when rebuilding from source.

### Source code / logs
29> Creating directories for 'grpc'
29> Cloning into 'grpc'...
29> Performing download step (git clone) for 'grpc'
29> fatal: unable to access 'https://boringssl.googlesource.com/boringssl/': Failed to connect to boringssl.googlesource.com port 443: Timed out
29> fatal: clone of 'https://boringssl.googlesource.com/boringssl' into submodule path 'D:/CNN/tensorflow/BUILD/grpc/src/grpc/third_party/boringssl-with-bazel' failed
29> Failed to clone 'third_party/boringssl-with-bazel'.
Retry scheduled
29> Failed to clone 'third_party/boringssl-with-bazel' a second time, aborting
29> CMake Error at D:/CNN/tensorflow/BUILD/grpc/tmp/grpc-gitclone.cmake:93 (message):
29> Failed to update submodules in: 'D:/CNN/tensorflow/build/grpc/src/grpc'
Answers:
username_1: I think that package is required. I don't know why you were having trouble downloading but I hope that the connection is working now. I will close for now assuming you were able to build, but please reopen if I have misunderstood.
Status: Issue closed
username_0: Yes, I know grpc is required. But what we need is incremental downloading, not redownloading the whole package when the build fails. As you know, grpc is more than 200M! My network is not good and every time I rebuild, it takes several hours to download the grpc package. The url of [boringssl](https://boringssl.googlesource.com/boringssl) is not accessible under the Great Firewall of China.
username_1: I see, I'm sorry, thanks for clarifying. @username_2 do you think this is something that would be feasible without heroic effort?
username_2: It sounds like it should be possible, but we'll need someone with more CMake expertise to suggest a fix, so I'll mark this as "contributions welcome". As a potential lead, the `UPDATE_DISCONNECTED` option seems relevant:
* https://github.com/Kitware/CMake/blob/5bbcf76399e107bbb1712ba8aeee27c160413d2d/Modules/ExternalProject.cmake#L320-L338
* https://stackoverflow.com/q/36254658/3574081
username_1: A couple of days ago the Windows build was switched over to use bazel instead of cmake. That doesn't solve your problem, but it means the best solution now is going to be a feature request to the bazel team I think. Sorry not to be more help.
Status: Issue closed
mollie/mollie-api-node
1093739143
Title: implement in NextJS
Question:
username_0: Is it possible to integrate the client in NextJS as an api route?
Answers:
username_1: This is possible, yes. See [nextjs.org/docs/api-routes/introduction](https://nextjs.org/docs/api-routes/introduction). You will likely put your Mollie API key in [an environment variable](https://nextjs.org/docs/basic-features/environment-variables). Please make sure you do not accidentally expose your API key to the client (or _to the browser_, as the Next.js documentation puts it). Let us know if you run into any issues!
username_0: Thx Pim,
So I have filled in everything on the web dashboard to get my account approved. But I still get the response: `'ApiError: Your account is currently suspended'` and
```
title: 'Unprocessable Entity',
status: 422,
field: undefined,
links: {
  documentation: {
    href: 'https://docs.mollie.com/overview/handling-errors',
    type: 'text/html'
  }
}
}
```
Status: Issue closed
username_1: Please [contact support](https://www.mollie.com/en/contact/merchants).
mini-kep/frontend-dash
271455298
Title: group selector for variable list
Question:
username_0: https://github.com/mini-kep/frontend-dash/blob/8c744b4f7fe8379913a95920faa8f2531e2dad55/app.py#L26

We now have 2 variable selectors. For each of these selectors I would like to have two selectors (dropdown menus) on one line: one chooses the variable group (eg GDP components, Prices, Foreign Trade) and the other chooses a variable from a smaller list. Need a) layout for this on a separate branch, b) data structure, c) callbacks.
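A minimal sketch of the data structure and chained-selection logic behind such paired dropdowns, with hypothetical group names and variable codes (the real ones would come from the app's data source); in a Dash app, `variable_options` would be the body of a callback wired from the group dropdown to the variable dropdown:

```python
# Hypothetical grouping -- the actual groups and variable codes live in the app/API.
VARIABLE_GROUPS = {
    "GDP components": ["GDP_yoy", "INVESTMENT_yoy"],
    "Prices": ["CPI_rog", "CPI_FOOD_rog"],
    "Foreign Trade": ["EXPORT_GOODS_bln_usd", "IMPORT_GOODS_bln_usd"],
}

def group_options():
    """Options for the first (group) dropdown."""
    return [{"label": g, "value": g} for g in VARIABLE_GROUPS]

def variable_options(group):
    """Options for the second dropdown, narrowed to the selected group.

    In Dash this would be registered roughly as:
        @app.callback(Output('variable-dropdown', 'options'),
                      [Input('group-dropdown', 'value')])
    """
    return [{"label": v, "value": v} for v in VARIABLE_GROUPS.get(group, [])]

print(variable_options("Prices"))  # the two CPI options
```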
Zrips/CMI
406124416
Title: [2 Bugs] Can Mine Spawners Without Req. Perm. & Flightcharge Bug
Question:
username_0: **Description of issue or feature request:**

1. Players that should not be able to silk-touch blaze spawners can do so anyway. Permissions given: https://gyazo.com/0ea7662a3261581ab4670b3df1b7346a
2. With Flightcharge expcharge, if a player has the max amount of charges (1000 by default) and tries getting more charges (such as with /recharge expcharge 1000), it will take exp from their current exp but not give them any additional charges because they are already at max. It should just tell them they have the max number of charges and cannot get any more.

---

**CONFIG SECTION (DELETE IF NOT RELEVANT):**
```
https://pastebin.com/KnFWNDZD
```

---

**Cmi Version (using `/cmi version`):** https://gyazo.com/7286d42e0c234079d015199598e035ab
**Server Type (Spigot/Paperspigot/etc):** Spigot
**Server Version (using `/ver`):** CraftBukkit version git-Spigot-b0f4c22-d5e9688 (MC: 1.13.2) (Implementing API version 1.13.2-R0.1-SNAPSHOT)
Answers:
username_0: Another bug related to this: players can do /flightcharge expcharge 12400 and it will take away all of their xp and set it to the 3000 max charge.
username_1: 1. Will be fixed by adding a new option to check by a specific permission node when needed. This will solve the current issue.
username_1: 2. Can't really reproduce this issue currently. Will add some extra fail-safes for it though, just in case.
username_1: 3. Same thing as 2. Can't reproduce this one. All seems to be working correctly for me. But again, will add an extra check for this one too, just in case. If the issue persists with the next update, give me a shout in here or on Discord.
Status: Issue closed
hjdhjd/homebridge-unifi-protect
1156276560
Title: Will plug-in support Protect Door Lock?
Question:
username_0: Will the plug-in support the Protect Door Lock?
Answers:
username_1: We do not support any EA products. This plugin, as stated in the documentation, will support - and is currently the only solution that completely supports - **all** UniFi Protect devices that are generally released. If, and when, Ubiquiti does so, we will support it.
Gizra/message_subscribe
188305601
Title: Must unsubscribe twice with message_subscribe_email enabled
Question:
username_0: When `message_subscribe_email` is enabled, and the subscribe/unsubscribe flag is set to use Ajax, something is happening such that the 'unsubscribe' link must be clicked twice to properly load the 'subscribe' link. This is purely aesthetic, because after one click, if the page is reloaded, the user is properly unsubscribed.
rstudio/reticulate
321610079
Title: Could not find or load the Qt platform plugin "xcb"
Question:
username_0: I downloaded https://raw.githubusercontent.com/rstudio/reticulate/master/tests/testthat/resources/eng-reticulate-example.Rmd (changed the `print` statement to the `print` function) and tried to convert it using knitr from the RStudio interface, but I received

````
This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "/usr/lib/rstudio/bin/plugins/platforms".

Available platform plugins are: minimal, offscreen, xcb.
````

as an error. Did I miss something?

# Environment
- RStudio Version 1.0.153
- R version 3.5.0 (2018-04-23)
- knitr 1.20
- reticulate 1.7.1
- rmarkdown 1.9
Answers:
username_1: This appears to be an RStudio rather than `reticulate` issue. However, if you're planning to leverage some of the RStudio IDE features for using `reticulate` I'd recommend installing a daily build from: https://dailies.rstudio.com

Or at least upgrading to the latest release of RStudio (v1.1.447): https://www.rstudio.com/products/rstudio/download/#download
username_0: I have a similar problem when calling knitr from the command line and not RStudio.

````
$ Rscript -e "library(knitr); knit('eng-reticulate-example.Rmd')"

processing file: eng-reticulate-example.Rmd
  |..                   |   3%
  ordinary text without R code

  |....                 |   6%
label: setup (with options)
List of 1
 $ include: logi FALSE

  |......               |   9%
  ordinary text without R code

  |........             |  12%
label: unnamed-chunk-1 (with options)
List of 1
 $ engine: chr "python"

  |..........           |  15%
  ordinary text without R code

  |............         |  18%
label: unnamed-chunk-2 (with options)
List of 1
 $ engine: chr "python"

  |..............       |  21%
  ordinary text without R code

  |................     |  24%
label: unnamed-chunk-3 (with options)
List of 4
 $ fig.width : num 4
 $ fig.height: num 3
 $ dev       : chr "svg"
 $ engine    : chr "python"

This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "".

Available platform plugins are: minimal, offscreen, xcb.

Reinstalling the application may fix this problem.
zsh: abort (core dumped)  Rscript -e "library(knitr); knit('eng-reticulate-example.Rmd')"
````

Any advice? I tried to reinstall matplotlib following the reticulate instructions but it didn't work.
username_1: That's very surprising! I'm not sure what could be trying to use Qt in your environment.
username_0: @username_1 Thanks for the support. I believe that for some reason the environment that reticulate or RStudio created was corrupted. I executed

~~~
$ conda env remove -n r-reticulate
$ conda create -n r-reticulate python=3
$ source activate r-reticulate
$ python -m pip install matplotlib
$ Rscript -e "library(knitr); knit('eng-reticulate-example.Rmd')"
~~~

and I had a working version of reticulate. I'm closing this for now. Let me know if you want any pull request to document this.
Status: Issue closed
username_1: Glad to hear you got to the bottom of this! I'm not sure we have a good place for these instructions to land, but at least having the GitHub issue in the history means that others who have similar problems will be able to stumble upon your post. Perhaps in the future we could consider making things like this part of a lightweight Q&A-style wiki page here on GitHub...
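A guess at the underlying mechanism, for anyone hitting the same error: matplotlib was likely configured with a Qt-based interactive backend, which loads the Qt "xcb" platform plugin at plot time. Forcing a non-interactive backend before matplotlib is imported keeps Qt out of the picture in headless knitr runs (note the conda reinstall above is what actually resolved it for the reporter):

```python
import os

# Backend selection via the environment must happen before matplotlib is imported;
# "Agg" renders to files/buffers without any GUI toolkit.
os.environ.setdefault("MPLBACKEND", "Agg")

# The equivalent from Python code, again before any `import matplotlib.pyplot`:
# import matplotlib
# matplotlib.use("Agg")
```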
TheOdinProject/theodinproject
230171335
Title: Redesign: Sign Up page
Question:
username_0: This task is to implement a redesigned sign up page as shown in the mock up image below.

**Important**

Please ignore the solutions component at the bottom of the mock up. We will create another issue for implementing that feature when we merge in the backend work for it in the next few weeks.

### Working on this task
- [ ] If you want to work on this task, please claim it by commenting below
- [ ] Branch off the redesign branch
- [ ] Check out our [styleguide](https://staging-odin-project.herokuapp.com/styleguide)
- [ ] When you are submitting your pull request for this task please ensure the base branch you want to merge into is the redesign branch

### Sign up page mock up
![signup](https://cloud.githubusercontent.com/assets/7963776/26278494/959b953e-3d93-11e7-812d-c4bdb97f1aa8.png)
Answers:
username_0: @105ron is working on this
Status: Issue closed
dmiklic/psiholeks-web
319152639
Title: Switch to gunicorn for production
Question:
username_0: More info [here](http://flask.pocoo.org/docs/1.0/deploying/wsgi-standalone/)
Answers:
username_0: Some info as to why it's a bad idea to run the flask built-in server in production:
https://stackoverflow.com/questions/20843486/what-are-the-limitations-of-the-flask-built-in-web-server
https://vsupalov.com/flask-web-server-in-production/

As described [here](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-14-04), this should be as simple as:
```
pip install gunicorn
gunicorn run:app
```
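For context on the `run:app` argument: gunicorn takes `module:callable`, so the command above assumes a `run.py` exposing a WSGI callable named `app` — which is exactly what a Flask application object is. A minimal stdlib sketch of the contract gunicorn relies on (Flask's `app` satisfies this same interface):

```python
# run.py -- a bare WSGI callable; `gunicorn run:app` would serve it the same way
# it serves a Flask app, because both implement the WSGI contract (PEP 3333).
def app(environ, start_response):
    body = b"Hello from gunicorn\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Quick smoke test without a server: invoke the callable the way a WSGI server would.
status_holder = {}
def fake_start_response(status, headers):
    status_holder["status"] = status

result = app({}, fake_start_response)
print(status_holder["status"])  # 200 OK
```

In production one would typically also set a worker count, e.g. `gunicorn -w 4 run:app`.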
iljackb/Mixtepec_Mixtec
320317773
Title: Normalize annotations of Utterances (remove pointers to phonetic forms)
Question:
username_0: In order to align the annotations of the spoken language content (`<u>`) files with those of the text sources, it is necessary to:

- remove double pointers in utterance files, point only to orthographic forms;
- add `@sameAs` on phonetic transcription `<seg>`'s and `<w>`'s

```xml
<u xml:id="d1e37" n="2" start="0" end="0.68">
   <seg xml:id="d1e38" function="utterance" notation="orth" type="phrase" sameAs="#d1e41">
      <w xml:id="d1e39" synch="#T1">kui'<c xml:id="d1e40">i̠</c></w>
   </seg>
   <seg xml:id="d1e41" function="utterance" notation="ipa" type="phrase" sameAs="#d1e37">
      <w xml:id="d1e42" synch="#T1">
         <c>k</c>
         <c>w</c>
         <c>ḭ</c>
         <c function="tone">H</c>
         <c>ʔ</c>
         <c xml:id="d1e45">iː</c>
         <c function="tone" xml:id="d1e46" synch="#d1e45">R_F</c>
      </w>
   </seg>
</u>
```

However, a major question to be resolved is that:

- this will cause a loss of information in terms of connecting tone to the grammatical & morpho-semantic info it expresses, e.g. the ability to point to `@d1e40` (i.e. "i̠") and `@d1e45` (i.e. "iːR_F") in order to label these subsegments as **1st person singular**:

```xml
<u xml:id="d1e37" n="2" start="0" end="0.68">
   <seg xml:id="d1e38" function="utterance" notation="orth" type="phrase">
      <w xml:id="d1e39" synch="#T1">kui'<c xml:id="d1e40">i̠</c></w>
   </seg>
   <seg xml:id="d1e41" function="utterance" notation="ipa" type="phrase">
      <w xml:id="d1e42" synch="#T1">
         <c>k</c>
         <c>w</c>
         <c>ḭ</c>
         <c function="tone">H</c>
         <c>ʔ</c>
         <c xml:id="d1e45">iː</c>
         <c function="tone" xml:id="d1e46" synch="#d1e45">R_F</c>
      </w>
   </seg>
</u>
<spanGrp type="gram">
   <span type="phrase" target="#d1e38 #d1e41" ana="#NP #POSS"/>
   <span type="pos" target="#d1e39 #d1e42" ana="#N"/>
   <span type="morph" target="#d1e40 #d1e46" ana="#TONE"/>
   <span type="person" target="#d1e40 #d1e46" ana="#1PERS"/>
   <span type="number" target="#d1e40 #d1e46" ana="#SG"/>
</spanGrp>
```
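The "add `@sameAs` on the phonetic transcription" step can be automated. Below is a rough sketch using Python's standard-library `xml.etree`, on a simplified utterance modeled after the example in this issue (element names and structure are illustrative, not the project's real pipeline): it pairs each utterance's orth and ipa `<seg>` children and points them at each other via `@sameAs`.

```python
import xml.etree.ElementTree as ET

# ElementTree exposes xml:id under the predeclared xml namespace.
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Simplified utterance, modeled on the example above.
sample = """<u xml:id="d1e37" n="2">
  <seg xml:id="d1e38" notation="orth" type="phrase"><w xml:id="d1e39">kui'i</w></seg>
  <seg xml:id="d1e41" notation="ipa" type="phrase"><w xml:id="d1e42">kwi'i</w></seg>
</u>"""

def cross_link(u):
    """Point the orth and ipa <seg> children of an utterance at each other via @sameAs."""
    segs = u.findall("seg")
    orth = next(s for s in segs if s.get("notation") == "orth")
    ipa = next(s for s in segs if s.get("notation") == "ipa")
    orth.set("sameAs", "#" + ipa.get(XML_ID))
    ipa.set("sameAs", "#" + orth.get(XML_ID))
    return u

u = cross_link(ET.fromstring(sample))
print(u.find("seg[@notation='ipa']").get("sameAs"))  # -> #d1e38
```

The inverse normalization (dropping the now-redundant pointer on the orth side) would just be `orth.attrib.pop("sameAs", None)`; a real script would additionally handle the TEI namespace and iterate over all `<u>` elements in a file.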