repo_name | issue_id | text
---|---|---
meejah/txtorcon | 107764420 | Title: onion: endpoint fallbacks + ephemeral
Question:
username_0: The "onion:" server-side listeners currently do the following:
1. if you specified a control-port in the onion-string, connect to that tor
2. call get_global_tor() and use the launched tor
As ephemeral services become available, it is possible to connect to system (or TBB) Tors to add onion services. This means: pass the private-key material in the string and:
1. if get_global_tor() ever got called, use that tor
2. try 9051 as a control-port, and add the onion if that worked
3. try 9151 as a control-port and add the onion if that worked
4. call get_global_tor() ourselves and launch a new tor
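A rough sketch of that fallback order (plain-socket probing only; the actual control-port handshake and the ADD_ONION call via txtorcon are left out, and the port numbers are the ones listed above):
```python
import socket

def pick_control_port(candidates=(9051, 9151)):
    # Probe the candidate control ports in order; return the first one
    # that accepts a TCP connection, or None to signal "launch our own tor".
    for port in candidates:
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=2):
                return port  # something is listening; try to add the onion here
        except OSError:
            continue
    return None  # step 4: launch a new tor ourselves
```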
Answers:
username_1: sounds good! |
RedlineResearch/QTR-tool | 530534397 | Title: Some classes aren't being instrumented (like java.lang.String)
Question:
username_0: There are some classes that aren't instrumented like java.lang.String.
The key is to determine what these classes are and find a way to instrument with javassist.
Answers:
username_0: The attached file lists the classes that are loaded before the Java agent instrumentation gets a chance to instrument them. The ones that I believe we can safely ignore are listed in the **IGNORE** section at the end. These classes are:
- javassist.*
- veroy.research.qtrtool.javassist.*
We may not need to instrument all of the classes listed there. In particular, how do we treat:
- lambda classes like `java.lang.invoke.LambdaForm$BMH/1418481495`
- java.lang.ref.SoftReference
- java.lang.ref.SoftReference[]
- java.lang.ref.WeakReference
[helloworld-debug-2019.txt](https://github.com/RedlineResearch/QTR-tool/files/3909834/helloworld-debug-2019.txt)
username_0: I believe this is fixed as of 17 Feb 2020.
Status: Issue closed
|
openshift/origin | 148087040 | Title: Provide a way to distinguish hook pods from other deployer pods
Question:
username_0: All deployer pods (the strategy pod and hook pods) are assigned an `openshift.io/deployer-pod-for.name` label so they can be identified as deployers. However, it's also useful to further identify these pods by their specific role. For example, a label like `openshift.io/deployer-pod.type=strategy` or `openshift.io/deployer-pod.type=pre-hook`.
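For illustration, this is how a client could filter by such a label once it exists. A minimal sketch with the Python kubernetes client; the label value is the proposal above, and the namespace is hypothetical:
```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List only the pre-hook deployer pods in a namespace.
pods = v1.list_namespaced_pod(
    "my-project",
    label_selector="openshift.io/deployer-pod.type=pre-hook",
)
for pod in pods.items:
    print(pod.metadata.name)
```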
Answers:
username_1: Or annotation in this case. The hook type will be per strategy, so we should keep in mind that we have to reserve space for the strategy to define its own annotations.
username_2: @username_0 What is the use case for this? Are you going to filter by `openshift.io/deployer-pod.type`, or do you just want to add an annotation to the deployer pod?
username_3: Do we want to filter even between hooks? (pre, mid, post)
username_4: seems helpful
username_1: A naive client, looking at the pods, should know what they are for. The
closer we can get to the api structures and concepts, the better (i.e. a
simple strategy-hook-type field is enough)
username_3: Talked with @username_2 and it sounds reasonable to both of us now |
vercel/next.js | 1155873860 | Title: Docs: Buttons in learn page not showing text
Question:
username_0: ### What is the improvement or update you wish to see?
Buttons in nextjs.org/learn do not show text


### Is there any context that might help us understand?
No.
### Does the docs page already exist? Please link to it.
https://nextjs.org/learn
Answers:
username_1: Hi, this should be resolved now, thanks for reporting!
Status: Issue closed
|
harmony-one/harmony-ops | 561931627 | Title: a new tool/system to monitor account balances
Question:
username_0: we need to develop a new tool to monitor all the account balances on Harmony blockchain.
two new RPC calls will be developed. Please work with @janet-harmony and her /1h tool to monitor all the accounts in the blockchain. And we need to keep track of the account balances in EVERY block. Another database needs to be used to cache all the data.
https://github.com/harmony-one/harmony/issues/2216
https://github.com/harmony-one/harmony/issues/2217
This is a parallel backup of all blockchain account balances. It can be used for monitoring, accounting, and security checking. We need to come up with a plan / design doc, and stages of implementation.<issue_closed>
Status: Issue closed |
MicrosoftDocs/azure-docs | 648457462 | Title: Step 3 Port test
Question:
username_0: To what destination endpoints do we test ports 443 and 80? Or is there a utility available that we can use to verify connectivity to the correct endpoints?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 057be820-930e-4a15-fad0-3466f6cb0b13
* Version Independent ID: 4f34c185-0f69-28cf-0292-b721b36514c3
* Content: [Debug Application Proxy connectors - Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-debug-connectors)
* Content Source: [articles/active-directory/manage-apps/application-proxy-debug-connectors.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/manage-apps/application-proxy-debug-connectors.md)
* Service: **active-directory**
* Sub-service: **app-mgmt**
* GitHub Login: @kenwith
* Microsoft Alias: **kenwith**
Answers:
username_1: @username_0 Thank you for your feedback. We will investigate and update the thread.
username_1: @username_0 Step 3 actually tests outbound connectivity to the following URLs, as listed in the [prerequisite page for setting up app proxy in your environment](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-add-on-premises-application#allow-access-to-urls).
| URL | How it's used |
| --- | --- |
| \*.msappproxy.net<br>\*.servicebus.windows.net | Communication between the connector and the Application Proxy cloud service |
| mscrl.microsoft.com:80<br>crl.microsoft.com:80<br>ocsp.msocsp.com:80<br>www.microsoft.com:80 | The connector uses these URLs to verify certificates. |
| login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>\*.microsoftonline.com<br>\*.microsoftonline-p.com<br>\*.msauth.net<br>\*.msauthimages.net<br>\*.msecnd.net<br>\*.msftauth.net<br>\*.msftauthimages.net<br>\*.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctdl.windowsupdate.com:80 | The connector uses these URLs during the registration process. |
You can allow connections to \*.msappproxy.net and \*.servicebus.windows.net if your firewall or proxy lets you configure DNS allow lists. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
I hope the above explains the same. The outbound endpoints are the ones which App Proxy will use for proper operation. Outbound port 80 is needed for downloading certificate revocation lists (CRLs) while validating the TLS/SSL certificate, and outbound 443 is needed for all outbound traffic to the App Proxy service in the cloud.
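As a quick self-check, one could probe a few of the concrete endpoints from the table with a short script. This is a sketch that only verifies outbound TCP reachability, not TLS validation or proxy rules:
```python
import socket

ENDPOINTS = [
    ("crl.microsoft.com", 80),
    ("mscrl.microsoft.com", 80),
    ("login.windows.net", 443),
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} ({exc})")
```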
I have added a link in step 3 referring to the Ports section, which describes the URL list as well. Hope this clarifies your point. We will now close this issue.
Status: Issue closed
|
relay-tools/react-relay-network-modern | 718854814 | Title: Terser drop_console: true breaks loggerMiddleware
Question:
username_0: Not a bug report, just something that may help someone in the future
https://github.com/relay-tools/react-relay-network-modern/blob/e04d347f075f09d1124e99cf019db6819913c492/src/middlewares/logger.js#L13
with `drop_console: true` produces
```js
const t = e && e.logger || void 0;
```
also terser has a bug https://github.com/terser/terser/issues/323 where
```js
compress: {
// https://github.com/relay-tools/react-relay-network-modern/blob/5ad3420206132073659d0f83a7b75c88eef6334b/src/middlewares/logger.js#L13
pure_funcs: [
"console.log.bind",
"console.log.prototype.bind",
"console.error.bind",
"console.error.prototype.bind",
],
}
```
doesn't help :/
My "fix"
```js
loggerMiddleware({
logger: (...args: unknown[]) => {
// https://github.com/terser/terser/issues/323
// https://github.com/relay-tools/react-relay-network-modern/blob/master/src/middlewares/logger.js#L13
console.log("[RELAY-NETWORK]", ...args);
},
})
```<issue_closed>
Status: Issue closed |
Liverpool-UK/somebody-should | 114465973 | Title: Automated signage for bus lanes
Question:
username_0: As they change based on time periods, it'd be good to have something at street level that shows when they're in service, so people don't have to work from memory.
Answers:
username_1: They'd probably work best still as LED signs, however, https://futurecities.catapult.org.uk/2015/12/10/blog-can-less-intrusive-technologies-help-keep-green-spaces-visually-quiet/ has some interesting city signage ideas |
dell/OpenManage-Enterprise | 746952906 | Title: @odata.nextLink is in data when using the top odata query
Question:
username_0: In all of our programs we check for the presence of `@odata.nextLink` and if it is present we continue to iterate. I noticed when using the top query I would end up getting all objects in OME, but it would take a *very* long time. On further inspection this is because `@odata.nextLink` is present even though there *shouldn't* be a next page because in my case there is only one result. Is this behavior correct?

Answers:
username_1: The usage of $top overrides the pagination constructs internally (as in, if the page size is x and top is set to y where y > x, then y results are returned in the response) - there isn't a clear spec entry that I could find (and on discussion with colleagues) that indicates that the nextLink shouldn't exist in the returned response, though it does seem like it would be preferable behavior.
Even if we change the behavior to drop the nextLink in the returned response, the code may need to change to not handle nextLink when using $top in the GET request, since older versions of OME / OME-M will continue returning it.
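For reference, a minimal sketch of such a guard: a pagination loop that only follows `@odata.nextLink` when `$top` was not requested (the session/auth setup and URL handling are hypothetical):
```python
import requests

def get_all(session, url, top=None):
    params = {"$top": top} if top else {}
    items = []
    while url:
        payload = session.get(url, params=params).json()
        items.extend(payload.get("value", []))
        # Older OME / OME-M versions return @odata.nextLink even when $top
        # is set, so only follow it when no $top was requested.
        url = None if top else payload.get("@odata.nextLink")
        params = {}  # a nextLink already carries its own query string
    return items
```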
username_0: @username_1 I'll give it some thought and come up with a stable way to handle the behavior.
Status: Issue closed
|
nilearn/nilearn | 365863060 | Title: Code refactoring necessary for future ease of maintenance and development
Question:
username_0: Also for Nistats: https://github.com/nistats/nistats/issues/239
There are many functions in Nilearn that are super long, some as long as ~100 lines, and that do multiple things within the same function body.
There are many difficulties with such an approach. It makes it impossible to do Test-Driven Development, the recommended way of programming. It also means new functions have to work around existing functionality, which cannot be segregated from existing functions for this reason.
In the long term, this code will become increasingly fragile and difficult to maintain. It also sends the message to new contributors that this is an acceptable way to write new code.
As a rule of software engineering, each function should do one, and ONLY one thing. In python, this usually means 5-7 lines in a function. So instead of a 100 line function, we should (for example), have 20 functions around 5 lines each. All called by an encompassing function.
This also ensures we can write more atomic unit tests for each of those small functions.
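To make the pattern concrete, here is an invented minimal illustration: one orchestrating function delegating to small, single-purpose helpers, each of which can be unit-tested on its own (all names are made up):
```python
def _load_inputs(path):
    ...

def _validate_inputs(data):
    ...

def _compute_result(data):
    ...

def process(path):
    # The encompassing function only coordinates the small steps.
    data = _load_inputs(path)
    _validate_inputs(data)
    return _compute_result(data)
```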
I intend to start refactoring this code. This is a massive undertaking, so I will do this slowly and piecemeal, while adding more atomic unit tests. I'll start with nistats. |
miguelgrinberg/Flask-SocketIO | 468248854 | Title: problem using ssl Cert
Question:
username_0: I am using flask-socketio to serve a webpage. In the run call I am putting the certificates like this
`socketio.run(app, debug=True, keyfile='key.key', certfile='cert.cert')`
The server starts, but when I want to connect to the interface I get the following message
`ssl.SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:2292)`
Furthermore, from the browser (Firefox) I get this message
`Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:5000/socket.io/?EIO=3&transport=polling&t=MlsiDhX. (Reason: CORS request did not succeed).`
Are those two issues related? What do I have to do?
Answers:
username_1: I would start by removing the SSL and determine if there are any other issues.
username_0: If I remove ssl certificates everything works just fine.
username_1: Why does the CORS error say `http://...` and not `https://...`? Any chance you are trying to connect over http by mistake?
username_0: No I use https to connect to app. Is there any way to use anything else such as regular socketio or something else in order to achieve secured connection?
username_1: If you are using https:// then why is the browser saying that you used http:// in the error message? That seems like a good clue that is related to your problem, so we need to understand what's going on there.
Status: Issue closed
|
NOAA-PMEL/LAS | 288199764 | Title: New time axis widgets don't work correctly on constrain page for user-defined variables
Question:
username_0: **Reported by @kevin-obrien on 22 Oct 2007 22:11 UTC**
After defining an xy average of the Air Temp variable in coads - the constraints page does not display the time selection widgets at all. I've attached an image of what the constraints page looks like.
A JavaScript error pops up in Firebug that says:
```
uncaught exception: DateWidget ERROR: render: menu_set "$axis.getDMYT()" must contain only characters from "YMDT"
```
Migrated-From: http://dunkel.pmel.noaa.gov/trac/las/ticket/278
Answers:
username_0: **Comment by @noaaroland on 23 Oct 2007 00:08 UTC**
Fixed in r2077. |
RSS-Bridge/rss-bridge | 594422653 | Title: [BandcampDailyBridge] Default value for content input always returned when used in Franchises context
Question:
username_0: I know it's strange to be opening a bug report for my own bridge, but I've run into an odd bug which I can't seem to fix.
**Describe the bug**
When viewing any feed from the 'Franchises' context, the default value for content is always returned by `getInput()`
Eg: `?action=display&bridge=BandcampDaily&context=Franchises&content=album-of-the-day&format=Html` will return the lists feed (the default) not the Album of the day feed.
**To Reproduce**
Steps to reproduce the behavior:
1. Add bridge (#1485)
2. Select any of the options other than 'lists' from the content drop down for Franchises.
3. View feed as HTML
4. The default 'lists' feed will be displayed regardless of the given content input value.
**Expected behavior**
The selected content feed should be returned.
**Desktop (please complete the following information):**
- OS: Windows 10
- Browser: Firefox
- Version: git.master.366d2d6
- PHP version 7.4.4
Answers:
username_0: Moving the 'Franchises' array to the end of the PARAMETERS array seems to fix the issue. 🤨
https://gist.github.com/username_0/f406256541d84ba2122070d4cc16cd21
Status: Issue closed
|
javiereguiluz/EasyAdminBundle | 54504029 | Title: Allow to easily change the color scheme of the backend
Question:
username_0: Although you cannot select the theme to display in EasyAdmin, you can easily link to your own CSS files to customize the backend to your needs.
However, my guess is that most of the time the backend design is "good enough" for simple backends except for one single thing: the colors of the backend don't match the corporate colors of the company/organization.
From the developer point of view, I'd like to provide something like this:
```yaml
easy_admin:
design:
color_scheme: {
'content_background': '#FFF',
'main_menu_background': '#CCC',
'main_menu_contents': '#222',
'accent': '#73D631',
'link': '#4083A9'
}
```
We would define 5 or 6 variables for the colors of the most important page elements and this way, everyone could easily tweak the interface just by changing 5 configuration options.
The problem is that I don't know how to actually do something like this. I don't want to force people to use Assetic and I won't use any frontend tool for this. Does anyone know how we can solve this problem?
Answers:
username_1: What about providing the skeleton of the CSS file as a Twig template and then compiling it with a console command into the final CSS file?
username_2: Or else, if you use the bundle inheritance system, you could also use blocks in Twig, in order to add/change some elements. I'm thinking about banners, brand pictures or the footer, or even changing the user's icon with a Gravatar one, etc.
All of these could be in classic Twig blocks, or even variables (but maybe less "clean"), so in your inheritance system you can have a "layout" Twig template that extends a "design" template (or vice-versa), the last extending the EasyAdmin base layout.
I hope I'm not too confusing :stuck_out_tongue_closed_eyes:
username_0: I still don't know how to solve this problem, because I'm against using Assetic or frontend-stuff tools. Right now, the best solution I've come up with is to replace the static `admin.css` file by an `admin.css.twig` template and render it using the admin controller:
```css
/* Original CSS code */
span.badge {
background-color: #D47843;
}
```
```twig
{# New template code #}
span.badge {
background-color: {{ main_color|default('#D47843') }};
}
```
The advantage is that the user can customize an entire backend just by defining some options in the `config.yml` file. No asset compiling or installing, changes are applied instantly, almost zero effort for the user:
```yaml
easy_admin:
color_scheme: { main_color: '#CC1414', ... }
```
The performance hit should be minimal because even if the template is long, its logic is trivial.
What do you think about this idea? If you think it's stupid, you can freely say so :)
username_2: If you only want to change the color scheme, then this method is probably the best, for sure.
But there's no need to make the AdminController heavier than it is, just create an "AssetsController" to handle these routes :laughing:
username_2: I'm coming back to this issue:
Why don't you set all the colors directly in the css files, and then if there are some overrides in the color scheme, add them in a `<style>` tag in the head?
I mean, for example, you keep your badge colors in the css at first.
Next, if the user sets the `badge_color` under the `color_scheme` param, then you just add it in the view like this:
```twig
<style type="text/css">
{% if config.color.scheme.badge_color %}
span.badge { background-color: {{ config.color.scheme.badge_color }}; }
{% endif %}
</style>
```
I know it sounds heavy in terms of code, but it's slightly lighter than creating a new controller...
username_0: @username_2 thanks for your proposal. It's true that it looks simple ... but I think it would be more complex than it looks at first. Why? To ensure visual integrity, the default theme will only allow defining a small number of colors (ideally 1, the main color; but probably 4 or 5 to also style the sidebar, etc.)
This means that a lot of different elements (with very long CSS selectors) will use the same color and we would have to extract+copy+paste 20 or more lines to change one color. I'm not rejecting your proposal yet, but I have to think more about the pros and cons of each proposal.
username_2: I asked my team about this issue, and in fact, both developers and html/css integrators agree that the `assets override` feature you already implemented is really sufficient, because you can inject a single css file to completely change the backend look. The other lead-dev also added that any developer using EasyAdmin knows how to inject css and how to change the back-end look, whether he knows the best practices or not. He'll simply search for the css selectors with a "firebug-like" tool to override them in his own file, and if he doesn't, a front-end specialist will do it.
I think that some code lines in the `design` section in the docs might be enough to give the developer the css selector he needs, but IMO and in my colleagues' opinion, what EasyAdmin already provides is really enough.
Hope it might help ;)
username_2: May I come back to this issue for more discussion?
I've been looking at what is possible to ensure layout customization is easy.
And what I saw when talking with colleagues about customizing EasyAdmin's backend is simple: layout override.
The layout override is the most used way of changing the design, and right now we don't have the possibility to do this.
I've been thinking about adding a new template layer to our layout, just by using the twig blocks system.
How to do it?
Every view extends the `logical_layout`, in which we have all the must-have elements of the admin layout (such as navbars and assets). This `logical_layout` extends a `skeleton_layout`, in which we ONLY have the html "top-frame" code, and all the block names we want to use and that will be filled by the `logical_layout`.
Then, we use a simple request listener like the one on [Orbitale CmsBundle](https://github.com/Orbitale/CmsBundle/blob/master/EventListener/LayoutsListener.php) : we inject the layout configuration (parsed with the Configuration and Extension classes), and the `logical_layout` extends this configured layout, still like in [Orbitale CmsBundle's views](https://github.com/Orbitale/CmsBundle/blob/master/Resources/views/Front/index.html.twig#L2), but in EasyAdmin we put this dynamic extension inside the `logical_layout` in order to show the navs and assets we need.
Then, a simple documentation chapter might tell the developer how to customize his back-end, and the advantage we have is that the proposed default layout will be so much a skeleton that the developer will only have to copy/paste it inside his own layout.
I'm starting to think that back-end design shouldn't be changeable in configuration, but instead should be configurable directly inside a Twig template, even if this one only contains variable setters like classes and colors by using Twig blocks.
What do you think about this?
username_0: Closing it because it's superseded by #208
Status: Issue closed
|
reefab/foobot_async | 314364710 | Title: Remove the version pinning
Question:
username_0: The version of aiohttp is pinned to [2.3.10](https://github.com/username_2/foobot_async/blob/master/setup.py#L16). Please remove that pinning.
Answers:
username_1: Thank you ... but how? :)
username_0: @username_1, the `foobot` platform is pulling in `foobot_async` and the version pinning is blocking the installation as Home Assistant requires aiohttp-3.1.x which is already present.
This needs to be fixed here and will require a new release of `foobot_async` which we can use for Home Assistant.
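For illustration, the kind of change being requested would look roughly like this in setup.py (a sketch; the exact minimum version to allow is an assumption):
```python
from setuptools import setup

setup(
    name="foobot_async",
    install_requires=[
        "aiohttp>=2.3.10",  # was: "aiohttp==2.3.10"
    ],
)
```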
username_2: This has been fixed.
The package has been updated with the version 0.3.1 which removes the aiohttp version pinning (and support for python 3.4).
Status: Issue closed
username_2: I made a PR to update the package version in HA:
https://github.com/home-assistant/home-assistant/pull/13942
username_0: Thanks for the quick response. |
saltstack/salt | 334217390 | Title: salt attempts to make ~ directory if using tab for autocomplete rather than creating files in home directory
Question:
username_0: ### Description of Issue/Question
salt attempts to make a '~' directory in the current directory when using tab for autocomplete, rather than creating files in the home directory
### Setup
(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)
changed group of relevant files/folders to allow salt to use auto completion with tabs
### Steps to Reproduce Issue
(Include debug logs if possible and relevant.)
change group of all folders and files in /etc/salt /var/log/salt to allow read for members in the salt group.
place the current user in the salt group.
type `salt` and press `<tab>`. If you are in a folder with write permissions, salt will create a directory named '~'; if you are in a folder where you don't have write permissions, an error occurs instead
This also happens if not all files and folders in /etc/salt /var/salt are readable by the user
### Versions Report
(Provided by running `salt --versions-report`. Please also mention any differences in master/minion versions.)
Salt Version:
Salt: 2018.3.1
Dependency Versions:
cffi: 1.11.5
cherrypy: 3.5.0
dateutil: 2.4.2
docker-py: Not Installed
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.8
libgit2: 0.26.0
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: 2.18
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: 0.26.0
Python: 2.7.12 (default, Dec 4 2017, 14:50:18)
python-gnupg: 0.3.8
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
locale: UTF-8
machine: x86_64
release: 4.4.0-127-generic
system: Linux
version: Ubuntu 16.04 xenial
can confirm that this was an issue in 2018.3.0 as well
Answers:
username_0: This only seems to be happening on my testing system right now so looking into this further. I did have the salt.bash script installed on my test cluster where I don't on my production system so checking to see if it has something to do with that.
I will follow up if removing the salt.bash script seems to fix it.
Status: Issue closed
username_0: Removing the salt.bash script stopped this issue. It was probably a few revisions behind, since I initially installed it with version 2017.7. Since this is not crucial for me I will close this issue and re-open if I find an issue with the new salt.bash script
checkmarx-ltd/cx-flow | 675257679 | Title: PR comment shows empty tables when there are no results
Question:
username_0: ### Describe the problem
This happens in PR comments when there are zero results to report in the summary due to the reporting filter criteria. An example would be for configurations set to only report on High and Medium results when scans yield only Low/Information results. The PR comment has the "Checkmarx Scan Complete" header and also contains 2 empty tables that only show the table headers.
### Proposed solution
Always show the "Checkmarx Scan Complete" header in post-scan PR comments but indicate there are no results to report instead of showing truncated tables.
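A minimal sketch of the proposed comment logic (function names and message wording are invented for illustration):
```python
def render_results_tables(findings):
    # Placeholder for the real table rendering.
    return "\n".join(f"- {finding}" for finding in findings)

def build_pr_comment(findings):
    lines = ["## Checkmarx Scan Complete"]
    if findings:
        lines.append(render_results_tables(findings))
    else:
        lines.append("No results to report under the current filter criteria.")
    return "\n\n".join(lines)

print(build_pr_comment([]))  # header plus the "no results" note, no empty tables
```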
### Additional details
Customer request.<issue_closed>
Status: Issue closed |
threefoldfoundation/tfchain | 443775825 | Title: JS bug in explorer Coin Output page for a coin-creation Output
Question:
username_0: ```
[Error] TypeError: undefined is not an object (evaluating 'explorerHash.transactions[i].rawtransaction.data.coininputs.length')
appendCoinOutputTables (hash.js:1430:83)
populateHashPage (hash.js:1553)
buildHashPage (hash.js:1733)
Global Code (hash.js:1946)
```
e.g. currently on <https://explorer.threefoldtoken.com/hash.html?hash=7758038b91239101aaa72b865934bbbdf4268766508f47258145307b8a38e826>.
The result is that you do not see the spend transaction:
<issue_closed>
Status: Issue closed |
GemTalk/Rowan | 756519154 | Title: Would like the concept of component level isDirty
Question:
username_0: Similar to https://github.com/GemTalk/Rowan/issues/172, it would be helpful to have the concept of component dirtiness.
Answers:
username_1: Would that be that a component is dirtied when it is internally changed, or when the packages contained by the component are dirty ... and would that be for *CategoryComponents* only?
username_0: The purpose is to support Jadeite giving a visual indication to the user that a component's package(s) have been dirtied. So when the packages contained by the component are dirty - for now.
My understanding is that _CategoryComponents_ are not to be displayed in the browser, so loaded components (i.e. visible browser components) only. |
wpilibsuite/RobotBuilder | 665628887 | Title: NewCommandFramework m_chooser SmartDashboard.putData missing
Question:
username_0: It seems that the putData function is missing for the SendableChooser that is created when RobotBuilder generates the project. Seeing that m_chooser is hardcoded in the RobotContainer.java exporter, a simple entry for `SmartDashboard.putData(m_chooser);` just after the `// END AUTOGENERATED CODE, SOURCE=ROBOTBUILDER ID=AUTONOMOUS` comment seems in line.<issue_closed>
Status: Issue closed |
mlkrantz/amazing-project | 335464368 | Title: please make these projects private
Question:
username_0: Hi Matt, it looks like you're no longer enrolled at dartmouth, but having these projects public violates the honor code, and obviously we don't want current cs50 students finding them (I'm TAing this term) so I'm hoping you can make these projects private for us. Thanks |
jenkinsci/helm-charts | 696731913 | Title: Do not restart Jenkins if it's not necessary
Question:
username_0: As a result of this with every update Jenkins restarts whenever a newer helm chart is used even if there are no changes to the `Deployment` or the `ConfigMap`.
The `configAutoReload` feature already does a great to avoid restarts for configuration changes done via via JCasC. It would be great if we could avoid restarts for updates in case they are not required.
**Describe the solution you'd like**
I suggest not rendering the version of the Chart in that label. So instead of `{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}` just `{{ .Chart.Name }}`.
You could still tell directly from the resource which chart was used to render the resource. Only the version would be missing.
I think that's not really a problem as one could run `helm list` to figure out which chart version is currently in use.
**Describe alternatives you've considered**
- Removing the `helm.sh/chart` annotation completely
This would also work, but you could no longer see that the resource was rendered by a helm chart.
- Introducing a new flag in values.yaml to conditionally disable rendering the label
That's possible, but I would prefer to keep it straightforward and not introduce new flags just for this.
Answers:
username_1: I'd like to work on this issue :)
username_0: Bonus points if you add unit tests for labels. Those are missing so far.
username_1: I'm currently trying out whether that change really makes sense, but it seems like I'm either misunderstanding something or the change wouldn't have the desired effect.
Under [Updating a Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) it states that "A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed".
I even tested it and sure enough: changing the label alone doesn't trigger a rollout for me
username_0: I think the description of this issue should have been updated.
The Deployment pod spec and the ConfigMap had those labels. Both had already been removed when I added unit tests.
So this is less about avoiding restarts, as there should not be any restarts at the moment. It's rather a convenience thing for people who store all rendered manifests in version control and deploy those via Flux or Argo CD.
In that case the updated `helm.sh/chart` label shows up in the git diff even though it's the only change to that resource. I could imagine that people who use helm alone would like to keep that label, while those who use Argo CD or Flux would prefer to remove it. So probably the best way to deal with this would be a value which allows removing that label.
username_1: Thanks for the clarification, I'll implement it accordingly :)
username_1: @username_0 Could you take a look at the #102 PR? I'm facing an issue where I don't exactly know why the tests are failing
Status: Issue closed
|
dusterio/lumen-passport | 644841923 | Title: it always return invalid_client
Question:
username_0: Request Method : POST
Url : v1/oauth/token
Request Body
```json
{
    "grant_type": "password",
    "client_id": 1,
    "client_secret": "<KEY>",
    "password": "<PASSWORD>",
    "scope": ""
}
```
Response:
```json
{
    "error": "invalid_client",
    "error_description": "Client authentication failed",
    "message": "Client authentication failed"
}
```
User Model
```php
class User extends Model implements AuthenticatableContract, AuthorizableContract
{
    use Authenticatable, Authorizable, HasApiTokens;

    protected $table = 'users';

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'user_id',
        'user_name',
        'user_email',
        'user_password',
    ];

    /**
     * The attributes excluded from the model's JSON form.
     *
     * @var array
     */
    protected $hidden = [
        'user_password',
    ];
}
```
Answers:
username_1: I have the same problem here. It always returns invalid_client. I've been stuck here for days. Can anyone please help us solve this problem?
username_2: up, having the same error,
username_3: I could solve this problem by running:
- php artisan passport:client --password
and using this client for password verification |
valnub/Framework7-Plugin-Welcomescreen | 340701398 | Title: Welcomescreen Not Close on Android
Question:
username_0: Hello,
I tested the plugin on my PC and everything works fine,
but when I build the app and open it on Android,
app.welcomescreen.close(); is not working.
The skip button is working, but when I call app.welcomescreen.close(); to close the welcome screen nothing happens.
I use the latest Framework7.
Answers:
username_1: Hey, can you post the js error (if any) and some of your code? Thanks
username_0: Hello @username_1 ,
I have no JS error.
My code is the following:
`$$('#accountButton').click(function(){ app.welcomescreen.close(); });`
On click nothing happens.
I also managed to get it to work. I opened your library and in the init I put this code:
```
Dom7(document).on('click', '.accountButton', function (e) {
e.preventDefault();
var $wscreen = Dom7(this).parents('.welcomescreen-container');
if ($wscreen.length > 0 && $wscreen[0].f7Welcomescreen) { $wscreen[0].f7Welcomescreen.close(); }
});
```
But I think that this isn't the best solution.
Thank you
username_1: Hey, can you upload a working demo that replicates the problem somewhere so I can look into this? thanks
Status: Issue closed
|
patternfly/patternfly-org | 437800800 | Title: spacing
Question:
username_0: spacing between the spacing text and the space graphic (16px)

Laptop
https://www.patternfly.org/v4/design-guidelines/styles/typography<issue_closed>
Status: Issue closed |
ikedaosushi/tech-news | 495368797 | Title: utern, a handy tool for viewing multiple Cloudwatch Logs log groups
Question:
username_0: utern, a handy tool for viewing multiple Cloudwatch Logs log groups<br>
When I want to check the logs of several Lambdas, or the logs of several Cloudwatch Logs groups for ECS-based microservices, doing it with the aws cli was tedious. I figured others must have the same problem...<br>
https://ift.tt/2zfa8x1 |
openshift/origin | 356716067 | Title: OKD 3.9 Ansible Service Broker ETCD certificate problem
Question:
username_0: i have a problem with the Ansible Service Broker. When i view the events in the default project, i get
```
"Error getting catalog payload for broker "ansible-service-broker"; received zero services; at least one
service is required".
```
I suspect that this issue is due to an error in the ASB-ETCD. The logs of the ASB-ETCD pod show the following error
```
2018-09-03 15:41:10.840386 I | embed: ClientTLS: cert = /etc/tls/private/tls.crt, key = /etc/tls/private/tls.key, ca = , trusted-ca = /var/run/etcd-auth-secret/ca.crt, client-cert-auth = true, crl-file =
2018-09-03 15:41:10.840984 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2018-09-03 15:41:10.843804 N | etcdserver/membership: set the initial cluster version to 3.3
2018-09-03 15:41:10.843851 I | etcdserver/api: enabled capabilities for version 3.3
2018-09-03 15:41:12.271738 I | raft: 8e9e05c52164694d is starting a new election at term 2
2018-09-03 15:41:12.271760 I | raft: 8e9e05c52164694d became candidate at term 3
2018-09-03 15:41:12.271779 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 3
2018-09-03 15:41:12.271792 I | raft: 8e9e05c52164694d became leader at term 3
2018-09-03 15:41:12.271800 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 3
2018-09-03 15:41:12.277938 I | embed: ready to serve client requests
2018-09-03 15:41:12.278333 I | etcdserver: published {Name:default ClientURLs:[https://asb-etcd.openshift-ansible-service-broker.svc:2379]} to cluster cdf818194e3a8c32
2018-09-03 15:41:12.283689 I | embed: serving client requests on [::]:2379
2018-09-03 15:41:12.293754 I | embed: rejected connection from "127.0.0.1:53690" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority", ServerName "")
WARNING: 2018/09/03 15:41:12 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
```
##### ASB configmaps
```
registry:
- type: dockerhub
name: dh
url: https://registry.hub.docker.com
org: ansibleplaybookbundle
tag: latest
white_list: [".*-apb$"]
user: "xxxx"
pass: "<PASSWORD>"
- type: local_openshift
name: localregistry
namespaces: ['openshift']
white_list: [.*-apb$]
dao:
etcd_host: asb-etcd.openshift-ansible-service-broker.svc
etcd_port: 2379
etcd_ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
etcd_client_cert: /var/run/asb-etcd-auth/client.crt
etcd_client_key: /var/run/asb-etcd-auth/client.key
log:
stdout: true
level: info
color: true
openshift:
host: ""
ca_file: ""
bearer_token_file: ""
namespace: openshift-ansible-service-broker
sandbox_role: edit
image_pull_policy: Always
keep_namespace: false
keep_namespace_on_error: true
broker:
dev_broker: false
bootstrap_on_startup: true
[Truncated]
recovery: true
ssl_cert_key: /etc/tls/private/tls.key
ssl_cert: /etc/tls/private/tls.crt
auto_escalate: False
auth:
- type: basic
enabled: false
```
##### Version
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
##### Expected Result
ASB works without problem
##### Additional Information
[ansible inventory](https://gist.github.com/username_0/f554446fc3004dd513c3ef0206afe597)
Answers:
username_1: @openshift/sig-ansible-service-broker
username_2: The etcd log looks normal EXCEPT `2018-09-03 15:41:12.293754 I | embed: rejected connection from "127.0.0.1:53690" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority", ServerName "")` - that doesn't look right.
username_0: Which certificate does this etcd use? I am using a Let's Encrypt certificate for the master and all the routes are using it without any problem. With the information provided in this error I'm unable to figure out the cause of the problem.
FarGroup/FarManager | 790865123 | Title: Introduce configuration for fast file search (Alt+Char)
Question:
username_0: I use Far in Windows and Midnight Commander (MC) in Linux. Fast file search works differently in Far and MC. It would be really great if Far had a **configuration option** to initiate fast file search with `Alt+S`, like in MC.
Answers:
username_1: In fact Far already has such an option, but in a general form: I mean macros.
So you can take this for example: https://github.com/FarGroup/FarManager/blob/master/extra/Addons/Macros/AltSearch.lua
Adapted to your need:
```lua
Macro {
description="AltS to search by name (like MC)";
area="Shell"; key="AltS";
action = function()
Keys('Alt<')
end;
}
```
Status: Issue closed
username_0: Thank you so much! It works!
username_0: @username_1 can you please suggest how to block opening the file quick search with any other Alt+Char except Alt+S?
username_1: Try this:
```lua
Macro {
description="Suppress FastFind function";
area="Shell"; key="/Alt./";
priority=40;
action = function() end;
}
```
username_2: @username_1 , it should be `key="/.Alt./"` or `key="/[LR]Alt./"`. |
TheUnderTaker11/GeneticsReborn | 385464536 | Title: [Suggestion] Reflex Gene
Question:
username_0: I'd like to see some genetics around reactions, like
if hit, then blindness 2 at attacker
or
if enters water, then night vision at self
something of the sort.
Answers:
username_1: I get where you are coming from, but the genes really apply to you, the player; you don't really give the effect to others. However the following is coming in the next release:
- A "claws" gene that will inflict bleed when you hit a mob
- Bottles of Viral agents which are splash potions (that stack!) to inflict dramatic negative effects.
- A "thorns" gene that will damage the attacker when the player is hit.
Those together should cover a lot of what you are discussing.
Status: Issue closed
|
Slava11121987/Work | 930864472 | Title: Incorrect solution
Question:
username_0: https://github.com/Slava11121987/Work/blob/1d59350dfe4f4579cfde8179e3b4f212973f52fb/DZ3-3/main.cpp#L8
This program solves one specific case, but it should solve the general one.
That is, the starting number could also be 2342346
PrefectHQ/prefect | 653264322 | Title: Setting `flow_run_id` in local runs breaks slack notifier
Question:
username_0: #2868 made some additions to local runs so that targets/results could be templated properly both locally and in runs using a backend API. Setting `flow_run_id` on local runs here:
https://github.com/PrefectHQ/prefect/blob/30c88b315c83748b114228703d42bfe3e7d6a763/src/prefect/core/flow.py#L964-L969
breaks the `slack_notifier` because it has an extra check for a `flow_run_id` in context.
https://github.com/PrefectHQ/prefect/blob/30c88b315c83748b114228703d42bfe3e7d6a763/src/prefect/utilities/notifications/notifications.py#L151
I think this should be resolved by making the backend functionality of the slack notifier opt-in instead of default when a flow_run_id is present.<issue_closed>
Status: Issue closed |
Egoistically/ALAuto | 541289036 | Title: No module named cv2
Question:
username_0: sorry im new to this stuff, im trying to run this on my phone
```
C:\Users\utente\Desktop\ALAuto-master>python ALAuto.py
Traceback (most recent call last):
  File "ALAuto.py", line 3, in <module>
    from modules.combat import CombatModule
  File "C:\Users\utente\Desktop\ALAuto-master\modules\combat.py", line 3, in <module>
    from util.utils import Region, Utils
  File "C:\Users\utente\Desktop\ALAuto-master\util\utils.py", line 1, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
```
Answers:
username_1: Give more details...
username_0: sorry im new to this stuff, im trying to run this on my phone
C:\Users\utente\Desktop\ALAuto-master>python ALAuto.py
Traceback (most recent call last):
File "ALAuto.py", line 3, in <module>
from modules.combat import CombatModule
File "C:\Users\utente\Desktop\ALAuto-master\modules\combat.py", line 3, in <module>
from util.utils import Region, Utils
File "C:\Users\utente\Desktop\ALAuto-master\util\utils.py", line 1, in <module>
import cv2
ModuleNotFoundError: No module named 'cv2'
username_1: `pip3 install -r requirements.txt`
username_0: I already had the requirements installed
```
C:\Users\utente\Desktop\ALAuto-master>pip3 install -r requirements.txt
Requirement already satisfied: opencv-python==3.4.4.19 in c:\users\utente\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 1)) (3.4.4.19)
Requirement already satisfied: scipy==1.1.0 in c:\users\utente\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 2)) (1.1.0)
Requirement already satisfied: imutils==0.5.3 in c:\users\utente\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 3)) (0.5.3)
Requirement already satisfied: numpy>=1.14.5 in c:\users\utente\appdata\local\programs\python\python37\lib\site-packages (from opencv-python==3.4.4.19->-r requirements.txt (line 1)) (1.17.4)
WARNING: You are using pip version 19.2.3, however version 19.3.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

C:\Users\utente\Desktop\ALAuto-master>python ALAuto.py
Traceback (most recent call last):
  File "ALAuto.py", line 3, in <module>
    from modules.combat import CombatModule
  File "C:\Users\utente\Desktop\ALAuto-master\modules\combat.py", line 3, in <module>
    from util.utils import Region, Utils
  File "C:\Users\utente\Desktop\ALAuto-master\util\utils.py", line 1, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
```
username_2: Have you installed different versions of Python on your computer (for example you have installed both Python 2.7 and Python 3.7)?
Status: Issue closed
username_0: Thank you for your help, I did have another copy of Python (Microsoft Store) because I couldn't use pip when I first downloaded it.
Uninstalled the other version of Python and ran the program.
ESMValGroup/ESMValTool | 893657784 | Title: Monthly ESMValTool meeting June
Question:
username_0: The next monthly ESMValTool meeting will be in June. A poll is available where you can choose preferred days and times, please do so before May 24 if you would like your preference to be taken into account:
https://doodle.com/poll/n7fix3g5grivez4g
If none of the proposed times work for you, please let us know by commenting on this issue.
The duration will be one hour. [Join the zoom meeting](https://us02web.zoom.us/j/86356385817).
Calendar invitations to the meeting are sent to the mailing list, join it [here](https://docs.esmvaltool.org/en/latest/introduction.html#user-mailing-list).
@ESMValGroup/esmvaltool-developmentteam
Format and agenda
- Anyone with an interest in ESMValTool is welcome to join and ask questions
- We start the meeting by doing a round where everyone gets one minute to introduce themselves and tell what they will be working on in the next month or what they would like help with.
- Using the input from the introduction round, we can decide to talk about specific topics with the entire group or make smaller breakout groups to continue the discussion there.
- If you already know that you would like to bring up a particular topic in the meeting, you can mention the GitHub issue on the topic in a comment in this issue. That will make it easy to find for other people in the meeting and we can already start discussing there before the meeting. Just create a new GitHub issue about your topic if there is no open issue yet.
- GitHub issues are our preferred way to communicate outside the meeting and keep track of work, so any feature requests/action items/etc discussed in the meeting should ideally result in a GitHub issue.
Notes from the April meeting can be found here: https://github.com/ESMValGroup/ESMValTool/issues/2086#issuecomment-816757772, while updates from the May workshop can be obtained as described here: https://github.com/ESMValGroup/ESMValTool/issues/2067#issuecomment-834379598.
Answers:
username_0: @ESMValGroup/esmvaltool-developmentteam I am very sorry, but I will not be available to organize the meeting at the dates initially in the doodle. I have updated the poll with new dates, could you please have another go at filling it?
username_0: Thanks for joining this morning! Here is a summary with some of the highlights of the meeting:
**Getting started**
@annefou and <NAME> were just getting started with the tool and were generally quite happy with the tutorial. A useful selection of information on getting started with [git](https://the-turing-way.netlify.app/reproducible-research/vcs.html) and [GitHub](https://the-turing-way.netlify.app/collaboration/collaboration.html) can be found in the linked articles from [The Turing Way](https://the-turing-way.netlify.app/welcome.html).
**IPCC AR6**
Guidelines have been sent out by @username_2 and @ledm to all IPCC AR6 authors that used the ESMValTool for creating their figures on how they can proceed to integrate those diagnostics into the ESMValTool.
**New projects**
Several people announced they would be starting new projects with the tool! :partying_face: Please add the details yourself if you would like to share
**New diagnostics overview paper published**
@katjaweigel Successfully got the [Earth System Model Evaluation Tool (ESMValTool) v2.0 – diagnostics for extreme events, regional and impact evaluation, and analysis of Earth system models in CMIP](https://gmd.copernicus.org/articles/14/3159/2021/gmd-14-3159-2021.html) diagnostics overview paper published :tada: She is still looking for reviewers for the last pull requests #2153 and #2156. Please give her a hand @ESMValGroup/tech-reviewers and @ESMValGroup/science-reviewers!
**CORDEX support**
@thomascrocker Mentioned that he is using the tool for analyzing CORDEX data and running into various issues. In IS-ENES3 there is some work planned before the end of this year to improve support for CORDEX data; this is led by @zklaus of SMHI, and from the IS-ENES3 proposal it looks like a 5 person-month contribution is also expected from CMCC, though I do not know who at CMCC will work on this. @zklaus or @username_2 Could one of you say more about this?
**Support for DCPP data**
@username_1 Is working on improving support for DCPP data (https://github.com/ESMValGroup/ESMValCore/issues/1120#issuecomment-843030860) and will organize a meeting with @jvegasbsc @schlunma @zklaus @username_0 and anyone else who is interested in the topic, to speed up the process of deciding how the recipe needs to be adjusted to cater for this type of data.
**Summary of ESMValTool workshop now available**
@axel-lauer Announced that a summary of the May ESMValTool workshop is now available [on esmvaltool.org](https://www.esmvaltool.org/meetings.html).
**Presentations from IS-ENES3 workshop available**
The presentations of the IS-ENES3 Virtual workshop on requirements for a fast and scalable evaluation workflow are available [here](https://is.enes.org/events/workshops/is-enes3-virtual-workshop-on-requirements-for-a-fast-and-scalable-evaluation-workflow) and @username_1 mentioned that the workshop was quite successful.
**Reading climate model output in its native format**
@zklaus @senesis and @bsolino are making good progress in adding support for reading the output of various climate models directly with ESMValCore
If I forgot to mention your update, please forgive me and add it yourself.
username_1: @username_1 Is working on improving support for DCCP data (ESMValGroup/ESMValCore#1120 (comment)) and will organize a meeting with @jvegasbsc @schlunma @zklaus @username_0 and anyone else who is interested in the topic, to speed up the process of deciding how the recipe needs to be adjusted to cater for this type of data.
Please fill in this doodle if you want to participate in the discussion: https://doodle.com/poll/4vaghiw3iu4bcikn?utm_source=poll&utm_medium=link
username_2: I'm not directly involved in this IS-ENES3 task but I know that @francesco-cmcc is using the ESMValTool for CORDEX data. Feel free to add here other IS-ENES3 members working with CORDEX.
Status: Issue closed
|
m09/deckz | 597370534 | Title: pouet
Question:
username_0: `$ deckz init
17:34:52 deckz.cli Copying /nas/data/FORMATIONS/slides/templates/targets.yml to current directory
Traceback (most recent call last):
File "/home/fmg/.virtualenvs/slides/bin/deckz", line 11, in <module>
load_entry_point('deckz', 'console_scripts', 'deckz')()
File "/nas/data/FORMATIONS/deckz/deckz/__main__.py", line 29, in main
return handler(**vars(args))
File "/nas/data/FORMATIONS/deckz/deckz/cli.py", line 176, in init
shutil_copy(str(paths.template_targets), str(paths.working_dir))
File "/home/fmg/opt/miniconda/lib/python3.7/shutil.py", line 248, in copy
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/home/fmg/opt/miniconda/lib/python3.7/shutil.py", line 120, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: '/nas/data/FORMATIONS/slides/templates/targets.yml'
`
Answers:
username_1: you haven't pulled from slides, I think
username_0: maybe :-)
it works much better after the pull.
however, I had to stash because my poetry.lock had changed.
how do I undo the modification/stash/pouet/thing?
Status: Issue closed
|
SasView/sasview | 427259464 | Title: Request for (x,y,e) readback from graph on mouse over (Trac #185)
Question:
username_0: Migrated from http://trac.sasview.org/ticket/185
```json
{
"status": "closed",
"changetime": "2016-10-11T16:36:28",
"_ts": "2016-10-11 16:36:28.927111+00:00",
"description": "\"When we plot a graph, it is possible with the cursor to point on a curve to have access to the (x,y) coordinate of a point? I think it is very useful to have a rapid information of the intensity level at a q value.\"\n\n\nArnauld Poulesquen\nCEA Marcoule\nDEN/DTCD/SPDE/L2ED\nLaboratoire de Chimie des Fluides Complexes et d'Irradiation\nB\u00e2t. 37 - BP 17171\nF - 30 207 Bagnols sur C\u00e8ze\nFRANCE\n \nT\u00e9l. : +33 (0)4.66.79.18.01\nFax : +33 (0)4.66.39.78.71\n\n\nResponse by SMK for !SasView Developers, 14/05/13:\n\n\"As far as I know this is not possible at the moment, but I agree it would be a nice feature. The best !SasView can offer is if you right-click on ANY data point and move down the context menu to the name of the data set you are interested in, then follow the chevron to !DataInfo. A window will pop-up and if you scroll down you will get a listing of all the (x,y,e) values for that data set. I will ticket the feature you ask for (but at this juncture I do not know how feasible it may be or how quickly it will be implemented).\"",
"reporter": "username_0",
"cc": "",
"resolution": "worksforme",
"workpackage": "SasView GUI Redesign",
"time": "2013-05-14T09:33:31",
"component": "SasView",
"summary": "Request for (x,y,e) readback from graph on mouse over",
"priority": "minor",
"keywords": "",
"milestone": "SasView 4.1.0",
"owner": "",
"type": "enhancement"
}
```
Answers:
username_0: Trac update at `2013/05/14 15:16:07`: **username_0** changed type from "defect" to "enhancement"
username_1: Trac update at `2013/07/04 13:45:24`: **username_1** commented:
Clicking anywhere on the graph currently puts the coordinates in the status bar area. It would probably be better to have this functionality such that information is printed in the graph window somewhere.
username_2: Trac update at `2013/08/31 01:54:57`: **butler** changed milestone from "" to "WishList"
username_2: Trac update at `2015/01/29 23:57:47`: **butler** changed workpackage from "" to "SasView GUI Redesign"
username_2: Trac update at `2015/08/14 20:21:29`: **butler** changed milestone from "WishList" to "SasView 4.0.0"
username_2: Trac update at `2015/01/31 17:41:19`: **butler** changed priority from "major" to "minor"
username_2: Trac update at `2016/03/20 22:44:22`: **butler** changed milestone from "SasView 4.0.0" to "SasView Next Release +1"
username_2: * **butler** changed milestone from "SasView Next Release +1" to "SasView 4.1.0"
Status: Issue closed
username_2: Trac update at `2016/10/11 16:36:28`:
* **butler** commented:
Closing this ticket as the feature requested does in fact exist. getting x,y, AND z in the 2D plots should be a separate ticket but probably should wait for the new GUI
* **butler** changed resolution from "" to "worksforme"
* **butler** changed status from "new" to "closed" |
JuliaLang/julia | 168181039 | Title: general concatenation (`cat`) of sparse arrays yields dense
Question:
username_0: ```julia
julia> cat((1,2), spdiagm(1:4), spdiagm(1:4))
8×8 Array{Int64,2}:
1 0 0 0 0 0 0 0
0 2 0 0 0 0 0 0
0 0 3 0 0 0 0 0
0 0 0 4 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 2 0 0
0 0 0 0 0 0 3 0
0 0 0 0 0 0 0 4
```
Cross reference #15172 and #16722. Best!<issue_closed>
Status: Issue closed |
Eclmist/Aether | 599561471 | Title: Golem monster is broken.
Question:
username_0: **Describe the bug**
Golem monsters spawn lying down. Audio also seems to be broken.
Answers:
username_1: Resolved in pr
Status: Issue closed
|
martin-helmich/typo3-typoscript-lint | 193999317 | Title: Add "paths" option to tslint.yml
Question:
username_0: In addition to the filename argument on CLI a `paths` option within `tslint.yml` would be useful.
This allows for always running the lint command in a generic fashion in various projects whereas each of them provides the list of paths/directories to scan. This is useful in CI environments.<issue_closed>
Status: Issue closed |
triambaka/Hangman | 724687782 | Title: Dynamic list of words
Question:
username_0: The current implementation uses a pre-defined set of words.
Make an API call to an external site that can provide more words for the game.
For example: https://svnweb.freebsd.org/csrg/share/dict/words?view=co&content-type=text/plain
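A minimal sketch of such a call, assuming a Python implementation (the URL is the one above):
```python
import random
import urllib.request

WORDS_URL = ("https://svnweb.freebsd.org/csrg/share/dict/words"
             "?view=co&content-type=text/plain")

def fetch_words(url=WORDS_URL):
    # download the newline-separated dictionary and keep plain alphabetic words
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8", errors="ignore")
    return [w.lower() for w in text.splitlines() if w.isalpha()]

def random_word():
    return random.choice(fetch_words())
```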
Answers:
username_1: Hey, I would like to add this feature
username_2: please assign me to this issue
Thank you :D
username_0: Sure!
Status: Issue closed
|
cschil/WD-385-initrd | 564090998 | Title: Not working on EX2 Ultra
Question:
username_0: It doesn't seem to work for me on EX2 Ultra. I use Debian Stretch and uImage from Fox-exe, but after loading your uInitrd to /dev/mtdblock2, it doesn't boot anymore. Would you still be able to help, or maybe you ditched your WD already? |
Alexreid95/BasicCalculator | 571527799 | Title: Good use of eval
Question:
username_0: `eval` makes the calculation code a lot simpler! In this specific case it's definitely ok.
However, be aware that eval is [quite dangerous](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval#Never_use_eval!) to use in general code though, especially with user input. Since it executes whatever it is given with full access (just like your own code) a malicious user could potentially do some damage in a "real" app. |
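If you ever need something safer, one common mitigation is to whitelist the input before evaluating it; a minimal sketch (not the only approach):
```js
// only digits, basic operators, parentheses, dots and whitespace are allowed
function safeCalculate(expression) {
  if (!/^[\d+\-*/().\s]+$/.test(expression)) {
    throw new Error("Invalid input");
  }
  return Function(`"use strict"; return (${expression});`)();
}
```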
hyperf/hyperf | 697793518 | Title: WebSocket message manager send bug [BUG]
Question:
username_0: Execute the command and paste the result below.
Command: `uname -a && php -v && composer info | grep hyperf && php --ri swoole`
Linux VM_0_12_centos 3.10.0-1062.18.1.el7.x86_64 #1 SMP Tue Mar 17 23:49:17 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PHP 7.3.19 (cli) (built: Jun 13 2020 16:21:46) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.19, Copyright (c) 1998-2018 Zend Technologies
Do not run Composer as root/super user! See https://getcomposer.org/root for details
hyperf/cache v2.0.3 A cache component for hyperf.
hyperf/command v2.0.5 Command for hyperf
hyperf/config v2.0.3 An independent component that provides configuration container.
hyperf/constants v2.0.3 A constants component for hyperf.
hyperf/contract v2.0.3 The contracts of Hyperf.
hyperf/database v2.0.4 A flexible database library.
hyperf/db-connection v2.0.3 A hyperf db connection handler for hyperf/database.
hyperf/devtool v2.0.3 A Devtool for Hyperf.
hyperf/di v2.0.4 A DI for Hyperf.
hyperf/dispatcher v2.0.3 A HTTP Server for Hyperf.
hyperf/event v2.0.3 an event manager that implements PSR-14.
hyperf/exception-handler v2.0.3 Exception handler for hyperf
hyperf/framework v2.0.3 A coroutine framework that focuses on hyperspeed and flexible, specifically use for build microservices and middlewares.
hyperf/guzzle v2.0.3 Swoole coroutine handler for guzzle
hyperf/http-message v2.0.3 microservice framework base on swoole
hyperf/http-server v2.0.4 A HTTP Server for Hyperf.
hyperf/json-rpc v2.0.7 A JSON RPC component for Hyperf RPC Server or Client.
hyperf/load-balancer v2.0.3 A load balancer library for Hyperf.
hyperf/logger v2.0.3 A logger component for hyperf.
hyperf/memory v2.0.3 An independent component that use to operate and manage memory.
hyperf/model-listener v2.0.3 A model listener for Hyperf.
hyperf/paginator v2.0.3 A paginator component for hyperf.
hyperf/pool v2.0.3 An independent universal connection pool component.
hyperf/process v2.0.3 A process component for hyperf.
hyperf/redis v2.0.3 A redis component for hyperf.
hyperf/rpc v2.0.3 A rpc basic library for Hyperf.
hyperf/rpc-client v2.0.7 An abstract rpc server component for Hyperf.
hyperf/rpc-server v2.0.3 An abstract rpc server component for Hyperf.
hyperf/server v2.0.5 A base server library for Hyperf.
hyperf/testing v2.0.3 Testing for hyperf
hyperf/translation v2.0.3 An independent translation component, forked by illuminate/translation.
hyperf/utils v2.0.0 A tools package that could help developer solved the problem quickly.
hyperf/validation v2.0.8 hyperf validation
hyperf/watcher v2.0.6
hyperf/websocket-client v2.0.3 A websocket client library for Hyperf.
hyperf/websocket-server v2.0.5 A websocket server library for Hyperf.
swoole
Swoole => enabled
Author => <NAME> <<EMAIL>>
Version => 4.5.2
Built => Jul 16 2020 10:32:04
coroutine => enabled
epoll => enabled
eventfd => enabled
signalfd => enabled
cpu_affinity => enabled
[Truncated]
#13 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpDispatcher.php(40): Hyperf\Dispatcher\HttpRequestHandler->handle(Object(Hyperf\HttpMessage\Server\Request))
#14 /hyperf-skeleton/vendor/hyperf/http-server/src/Server.php(118): Hyperf\Dispatcher\HttpDispatcher->dispatch(Object(Hyperf\HttpMessage\Server\Request), Array, Object(Hyperf\HttpServer\CoreMiddleware))
#15 {main}
[ERROR] Hyperf\HttpMessage\Exception\NotFoundHttpException:Not Found(0) in /hyperf-skeleton/vendor/hyperf/http-server/src/CoreMiddleware.php:173
Stack trace:
#0 /hyperf-skeleton/vendor/hyperf/http-server/src/CoreMiddleware.php(107): Hyperf\HttpServer\CoreMiddleware->handleNotFound(Object(Hyperf\HttpMessage\Server\Request))
#1 /hyperf-skeleton/vendor/hyperf/dispatcher/src/AbstractRequestHandler.php(65): Hyperf\HttpServer\CoreMiddleware->process(Object(Hyperf\HttpMessage\Server\Request), Object(Hyperf\Dispatcher\HttpRequestHandler))
#2 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpRequestHandler.php(26): Hyperf\Dispatcher\AbstractRequestHandler->handleRequest(Object(Hyperf\HttpMessage\Server\Request))
#3 /hyperf-skeleton/app/Middleware/CorsMiddleware.php(28): Hyperf\Dispatcher\HttpRequestHandler->handle(Object(Hyperf\HttpMessage\Server\Request))
#4 /hyperf-skeleton/vendor/hyperf/dispatcher/src/AbstractRequestHandler.php(65): App\Middleware\CorsMiddleware->process(Object(Hyperf\HttpMessage\Server\Request), Object(Hyperf\Dispatcher\HttpRequestHandler))
#5 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpRequestHandler.php(26): Hyperf\Dispatcher\AbstractRequestHandler->handleRequest(Object(Hyperf\HttpMessage\Server\Request))
#6 /hyperf-skeleton/app/Middleware/DebugMiddleware.php(27): Hyperf\Dispatcher\HttpRequestHandler->handle(Object(Hyperf\HttpMessage\Server\Request))
#7 /hyperf-skeleton/vendor/hyperf/dispatcher/src/AbstractRequestHandler.php(65): App\Middleware\DebugMiddleware->process(Object(Hyperf\HttpMessage\Server\Request), Object(Hyperf\Dispatcher\HttpRequestHandler))
#8 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpRequestHandler.php(26): Hyperf\Dispatcher\AbstractRequestHandler->handleRequest(Object(Hyperf\HttpMessage\Server\Request))
#9 /hyperf-skeleton/vendor/hyperf/validation/src/Middleware/ValidationMiddleware.php(81): Hyperf\Dispatcher\HttpRequestHandler->handle(Object(Hyperf\HttpMessage\Server\Request))
#10 /hyperf-skeleton/vendor/hyperf/dispatcher/src/AbstractRequestHandler.php(65): Hyperf\Validation\Middleware\ValidationMiddleware->process(Object(Hyperf\HttpMessage\Server\Request), Object(Hyperf\Dispatcher\HttpRequestHandler))
#11 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpRequestHandler.php(26): Hyperf\Dispatcher\AbstractRequestHandler->handleRequest(Object(Hyperf\HttpMessage\Server\Request))
#12 /hyperf-skeleton/vendor/hyperf/dispatcher/src/HttpDispatcher.php(40): Hyperf\Dispatcher\HttpRequestHandler->handle(Object(Hyperf\HttpMessage\Server\Request))
#13 /hyperf-skeleton/vendor/hyperf/http-server/src/Server.php(118): Hyperf\Dispatcher\HttpDispatcher->dispatch(Object(Hyperf\HttpMessage\Server\Request), Array, Object(Hyperf\HttpServer\CoreMiddleware))
#14 {main}
Status: Issue closed
Answers:
username_0: Process exit error.
Geovation/photos | 684433257 | Title: photos disappear from the map
Question:
username_0: Sometimes the photos disappear from the map. It requires a hard reload.
Status: Issue closed
Answers:
username_0: cannot reproduce....
username_0: Sometimes the photos disappear from the map. It requires a hard reload.
username_0: I've uploaded 1600 photos. The map showed only 100. After a while just 200.
username_0: it is related to the cache... ignore it for now as not a real problem for any user.
Status: Issue closed
|
TerriaJS/terriajs | 116477182 | Title: Support color maps for enumerated fields
Question:
username_0: We currently don't have any nice way to set specific colours for different enumerated values in CSV files. It would probably be reasonable to do something like:
```
{ "colorMap": [
{ "color": "red",
"value": "fish"
}, {
"color":"blue",
"value": "dog"
}
]
}
```
C.f. `offset` instead of `value` for numeric fields.
Answers:
username_0: Come to think of it, we should allow this for numeric fields too. Currently our colorMaps map to scale of 0 to 1, dependent on whatever the actual values end up being. In some cases it would be better to be able to do:
```
{ "colorMap": [
{ "color": "red",
"value": "0"
}, {
"color":"white",
"value": "50"
}, {
"color":"blue",
"value": "100"
}
]
}
```
Maybe with `min` and `max` properties, too. It will be interesting trying to combine this with color bins. Perhaps it just becomes a "manual' binning method.
username_1: How close is this to https://github.com/TerriaJS/terriajs/issues/1097?
username_0: They're actually quite independent.
* This: allow colours to be pinned to specific values
* #1097: allow each column in a table to have a defined style.
username_2: Is this covered by #1551?
username_0: No. That one only covers boundaries of numeric fields. This one is about enumerated fields.
Possibly they could be unified in this somewhat awkward way:
```
colorBins: 3 // 3 bins, boundaries determined automatically
colorBins: [20, 40] // 3 bins, boundaries manually specified
colorBins: { "dogs": "blue", "cats": "red" } // specific values for enumerated fields
```
username_3: @username_2, @username_0 Are you both happy if I go ahead and implement this `colorBins` object syntax? The only problem I see is that the other 2 versions don't specify colours, they are specified separately, so possibly it should be done like that? It looks like it could be a bit awkward in `LegendHelper.js` if we put the colours in `colorBins`.
How does setting a binning method interact with explicit colour bins?
username_0: Hmm I actually think I like my original syntax better:
```
{ "colorMap": [
{ "color": "red",
"value": "fish"
}, {
"color":"blue",
"value": "dog"
}
]
}
```
Any complications with that?
username_3: I'll go ahead with the `"colorMap"` list of objects syntax.
username_2: sounds good to me!
username_3: I've come back to this issue today, and I think that it would be better to fully separate colouring of scalar and enum columns.
Proposed changes:
- `colorMap` and `colorBins` and `method` only apply to scalar valued columns
- new property `colorCategories` that can be (like `colorBins`) either an integer that tells Terria how many different colour stops to use, or a list of objects that fully describes the colour for each value
This will clean up `LegendHelper.js`, removing all of the weirdness that comes from trying to treat enums like scalar-valued columns.
Potential problems:
- The `"cycle"` method. This will presumably still need to be supported for enums (and only for enums? Doesn't seem much use for scalar values)
username_2: That sounds good to me - anything that cleans up the code is welcome. We should keep an eye on backwards compatibility though - eg. if you go with your plan above, add a deprecation warning if `colorMap` etc is used on an `enum` column, and just re-use `colorBins` instead of `colorCategories` for enums?
username_3: I've started making changes to add this feature and clean up the mix of scalar and enum code in `LegendHelper.js`; so far I have #e24b348a4. The way I'm heading, it looks like I'm going to be removing all `indicesIntoUniqueValues`-related properties and just doing all colouring and legend creation from the `uniqueValues` themselves. That seems to make sense to me; do you know of any reason to retain all of these indices? The indices are another layer of indirection, and they're making O(n) operations into O(n^2), so removing them ought to speed things up as well.
username_2: Without looking into it in depth, that all sounds good to me. Just check where indicesIntoUniqueValues is used in terriajs; I'd be surprised if anyone else uses it. But add a deprecation warning anyway if you can. Maybe lazy-calc it in that case?
Note Cycle is the default in many cases.
username_3: Where is cycle used? Is it used with columns that have a large number of unique values?
username_3: Even with `"cycle"`, the storage and processing required is decreased from the current method, as the `indicesIntoUniqueValues` array takes `O(m)` space and `O(nm)` time to build (where `n = values.length` and `m = uniqueValues.length`), and the `Object` lookup only takes `O(m)` space and `O(m)` time.
username_3: Deprecating `indicesIntoUniqueValues`, `indicesOrValues`, `indicesOrNumericalValues` & `usesIndicesIntoUniqueValues` in `TableColumn`.
username_3: Does explicitly colouring `"cycle"` enum columns make any sense? So, say I have a column whose `uniqueValues` are ['A', 'B', ..., 'Z']. I could use an explicit enum colouring:
```json
"colorBins": [
{ "color": "red",
"value": "A"
}, {
"color":"blue",
"value": "D"
}
]
```
and then any `A` regions/points would be red, and similar for `D`, then just assign `['B', 'C', 'E', ..., 'Z']` alternately to red and blue.
Is it advantageous to have this explicit colouring for cycled enum columns?
username_0: Hmm, there are always so many edge cases!
A very related question is (or a different view of the same question): if explicit colors are provided, how do we color any values that aren't explicitly defined?
There are three obvious possibilities:
- cycle the values provided
- color all the rest some boring color
- color all the rest according to some default scheme
I'm not sure it really makes sense to cycle values when a user has explicitly mapped colors to values - as opposed to, say, just providing a palette (`colorMap`). If you want Liberals to be blue and Labor to be red, do you really want blue or red to be used for any other parties?
Status: Issue closed
|
ReactiveX/RxJava | 66777033 | Title: PublishSubject caches error, won't emit onNext
Question:
username_0: Not sure if this is intended behavior, but when using a PublishSubject (and any other subject) the error seems to be cached, so each time I subscribe to the subject it propagates that error again. Additionally, subsequent calls to onNext are ignored.
```java
PublishSubject<String> subject = PublishSubject.create();
subject.onError(new RuntimeException());

subject.subscribe(
    System.out::println,
    System.err::println
);

subject.onNext("Hello World!"); // never emitted
```
If this is intended behavior, how would I go about implementing something that emits errors but will continue normally afterwards?
Answers:
username_1: This is by design. If you need to send multiple error events, you need to wrap the values into a container such as Notification.
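A minimal sketch of that approach, assuming RxJava 1.x's `rx.Notification`:
```java
PublishSubject<Notification<String>> subject = PublishSubject.create();

subject.subscribe(n -> {
    if (n.isOnError()) {
        System.err.println(n.getThrowable()); // handled, stream keeps going
    } else if (n.isOnNext()) {
        System.out.println(n.getValue());
    }
});

subject.onNext(Notification.createOnError(new RuntimeException("recoverable")));
subject.onNext(Notification.createOnNext("Hello World!")); // still emitted
```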
Status: Issue closed
username_0: Thanks for the response. I'll look into Notifications.
username_2: @username_1 I understand the reasons behind this, but following your advice would mean unwrapping that container object inside `onNext()` and checking whether or not it is of kind error, then dealing with the error from within that context. That kind of feels strange to me, is there any other way to workaround this?
username_1: If you don't want to wrap onNext events you can always have an Observable<Object> where you wrap exceptions into a distinct class and send it down the stream. On the consumer side, you have to do instanceof checks do distinguish between normal values and the wrapped exceptional values.
username_3: Errors are terminal events and everything shuts down. That is the contract, similar to throwing an exception. If terminal behavior is not wanted, then don't emit an error via 'onError' but instead as a message through 'onNext' since it's not an actual error (a terminal event) if multiple can be sent.
As @username_1 stated, materialize()/ dematerialize() are available to turn terminal events into Notifications.
You can also decouple streams so one terminates and is then restarted every time an error is received by using retry(), like this: https://gist.github.com/username_3/04eef9ca0851f3a5d7bf
username_0: @username_1 doing that doesn't seem much better, since now your type parameter would need to be ```Object``` rather than the expected type. It's a murky solution either way. What would be nice is a way to reset an Observable after it has been terminated, either by onComplete or onError, by just putting it back in its initial state, but keep its subscribers. This can be done manually, but an Rx solution would be handy.
It sounds like this flies in the face of the designed contracts, though.
username_4: Why just not have `onNext` and `onError` methods?
username_1: The Observable protocol allows only at most 1 error signal. This makes it unambiguous when a flow has ended. With multiple errors, there is generally no way to know if there could be more signals or your source crashed for good. Also it would make cold retries difficult: which of the arbitrary number or type of error should trigger a complete retry?
What you may be looking for is multi-value flows which, until Java gets value types, you have to emulate with tuples/record classes.
username_4: @username_1 thanks for the comment. I've actually been using _RxSwift_ and was comparing the _Rx_ interface with the more usual _Promises_ or _Futures_ interfaces. They look a bit nicer at the call site, as I may want to just propagate the error further, but handle `Success` a bit differently.
mauriciormr/blog | 654384624 | Title: {"title":"a26","description":"Hsjs","coverBlog":" ","coverCEO":" "}
Question:
username_0: cgfgf
Status: Issue closed
Answers:
username_1: No matter how you look at it, everything depends on the level you want to reach once you have done your best to cope with the situations in your life.
On this occasion I want to comment a bit on everything we are living through. These may not be the best situations, but if we try, we can achieve truly great things.
Our human behavior often does not let us go beyond our own perspectives; we must accept that ++**we are not the best, but we can try**++.
As we grow, we can use the following table:
| To be or not to be | a |
|-|-|
|content1|content2|
Status: Issue closed
|
hl7ch/ch-emed | 794861368 | Title: cardinalities for ingredients in medication
Question:
username_0: if a GTIN is specified, ingredients are not required according to IPAG, this is
modeled in CDA and FHIR looks here too strong:
ingredient 1..* -> in CDA it is 0..*
does this affect all Medication Profiles?
https://art-decor.org/art-decor/decor-templates--cdachemed-?section=templates&id=2.16.756.5.30.1.1.10.4.34&effectiveDate=2019-12-11T11:31:52&language=en-US
Answers:
username_0: https://docs.google.com/spreadsheets/d/1Ui3NGFE2I8yiOlHELk-B0Pke2l9-Jbe5BTeYOnS8-uE/edit#gid=1086773989
Status: Issue closed
|
dotnet/dotnet-docker | 1047595332 | Title: Breaking Change: Default Debian version set to Debian 11
Question:
username_0: # Breaking Change: Default Debian version set to Debian 11
The default Debian version has been upgraded to Debian 11 (Bullseye) as part of the .NET 6 release. This means that any usage of the `6.0`, `6.0.x`, or `latest` tags when targeting Linux containers will give you an image based on Debian 11.
## Details
This upgrade conforms to our [policy of releasing new major versions of .NET on the latest stable version of Debian](https://github.com/dotnet/dotnet-docker/blob/main/documentation/supported-platforms.md).
Previous .NET versions (.NET Core 3.1, .NET 5) will continue to use Debian 10 (Buster) as the default but [image tags](https://hub.docker.com/_/microsoft-dotnet) are available for Debian 11 for these versions. Container images based on Debian 10 will not be made available for .NET 6.
If you run into unexpected issues as part of migrating from .NET 5 to .NET 6 with the use of Debian, it is recommended to read the [Debian 11 release notes](https://www.debian.org/releases/bullseye/releasenotes) to understand the impact of this upgrade.
## Related Issue
https://github.com/dotnet/dotnet-docker/issues/2576
Answers:
username_1: @username_0 can you please confirm that, even though no docker images are available based on Debian 10 (buster) for .NET 6, Debian 10 is anyways a supported OS for .NET 6 (as stated [here](https://github.com/dotnet/core/blob/main/release-notes/6.0/supported-os.md))?
username_0: @username_1 - That's correct. The OS versions that .NET supports is a superset of the official container images that we provide. If it's listed as an OS version that .NET supports, that includes support within containers.
Status: Issue closed
|
abarson/WePanic-DL | 331280023 | Title: Model post processing
Question:
username_0: we want `run_model.py` to append model stats to a csv at the end of training
Answers:
username_0: COLUMNS: date, model_location, training_accuracy, validation_accuracy, testing_accuracy
---> write run_model parameters to model directory
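A minimal sketch of that post-processing step (the helper name and call site are assumptions):
```python
import csv
import datetime
import os

def append_model_stats(csv_path, model_location, train_acc, val_acc, test_acc):
    """Append one row of model stats, writing the header on first use."""
    columns = ["date", "model_location", "training_accuracy",
               "validation_accuracy", "testing_accuracy"]
    write_header = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(columns)
        writer.writerow([datetime.date.today().isoformat(), model_location,
                         train_acc, val_acc, test_acc])
```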
Status: Issue closed
|
hb5co/drupal-ddp | 65469134 | Title: How do users get inserted into Meteor?
Question:
username_0: /cc @username_1 @username_2
Answers:
username_1: @username_2 can you follow up with @username_0 on this one?
For demo related stuff for our current project, I would think you could have some test users just created on the Meteor side.
username_2: Users are created via Accounts.createUser()
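A minimal sketch of creating such a test user, assuming the `accounts-password` package (values are illustrative):
```js
// server-side startup code
Meteor.startup(() => {
  if (Meteor.users.find().count() === 0) {
    Accounts.createUser({
      username: 'testuser',
      email: 'test@example.com',
      password: 'changeme',
    });
  }
});
```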
Status: Issue closed
username_2: User accounts are now syncing |
ossrs/srs | 668477605 | Title: Error on emit via Streamlabs Mobile
Question:
username_0: srs | thread [1][5o794n61]: do_cycle() [src/app/srs_app_rtmp_conn.cpp:210][errno=62]
srs | thread [1][5o794n61]: service_cycle() [src/app/srs_app_rtmp_conn.cpp:407][errno=62]
srs | thread [1][5o794n61]: stream_service_cycle() [src/app/srs_app_rtmp_conn.cpp:539][errno=62]
srs | thread [1][5o794n61]: start_fmle_publish() [src/protocol/srs_rtmp_stack.cpp:2763][errno=62]
srs | thread [1][5o794n61]: expect_message() [src/protocol/srs_rtmp_stack.hpp:348][errno=62]
srs | thread [1][5o794n61]: recv_message() [src/protocol/srs_rtmp_stack.cpp:403][errno=62]
srs | thread [1][5o794n61]: recv_interlaced_message() [src/protocol/srs_rtmp_stack.cpp:913][errno=62]
srs | thread [1][5o794n61]: read_basic_header() [src/protocol/srs_rtmp_stack.cpp:1008][errno=62]
srs | thread [1][5o794n61]: grow() [src/protocol/srs_protocol_stream.cpp:179][errno=62]
srs | thread [1][5o794n61]: read() [src/protocol/srs_service_st.cpp:517][errno=62](Timer expired)
Answers:
username_1: Same, no custom config. Only with android StreamLabs client, looks like problem in
[2020-09-02 10:42:52.169][Trace][72296][76b922s3] connect app, tcUrl=rtmp://getty.wtf.org/live, pageUrl=, **swfUrl=rtmp://getty.wtf.org/live,** schema=rtmp, vhost=getty.wtf.org, port=1935, app=live, args=null
username_1: @username_2 old version of app streamlabs ok

gplay version

username_2: Won't fix.
Status: Issue closed
|
geopython/pygeoapi | 584779812 | Title: Support a link to features in current bounding box from the collection /items page
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Data sets are often large and people are sometimes interested in data from only a limited extent of the full data set. Bounding-box filtering is supported, but actually producing one of those bounding-boxes is a problem for a computer more often than a human. As a (human) user, I would like to have a view of data just within a bounding box, and to be able to share that link.
**Describe the solution you'd like**
We could enhance the UX of the HTML view of a collection by including a dynamic (client-generated) link on the `/items` page to the same collection just limited by the current bounding box of the preview map. This link could be updated each time the preview map finishes panning, zooming, or resizing. On a page **with an active bounding box filter**, this dynamic link generation should/would/could be **disabled**. The `canonical` link would also be present in the page `head` to link back to the full collection for (e.g. search engine) caching.
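A minimal client-side sketch of this, assuming a Leaflet preview map named `map` and a hypothetical `bbox-link` element:
```js
map.on('moveend', () => {
  const b = map.getBounds();
  const bbox = [b.getWest(), b.getSouth(), b.getEast(), b.getNorth()].join(',');
  // point the shareable link at the current view's bounding box
  document.getElementById('bbox-link').href =
    `${window.location.pathname}?bbox=${bbox}`;
});
```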
**Describe alternatives you've considered**
Alternatively, as the map view changes, a new network request could go out for the GeoJSON data that fits into the current extent; the table would update as the map is panned/zoomed/resized. I don't like this option since it implies many redundant requests, but it probably meets someone else's definition of desirable behaviour.
**Additional context**
The link could be added below the map here, and only for the items page, e.g. https://demo.pygeoapi.io/master/collections/obs/items. Since it is dynamically generated and only relevant to a human user at a particular moment, it would not be included in the server-generated `links` part of the response.
 |
rust-lang/rust | 573556937 | Title: Possible regression on use of Self in generics ("mismatched types")
Question:
username_0: The following code ([also on playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=a066ec9dd99d5b365e9b31d75603ac3e)) compiles fine on Stable and Beta `rustc`, but has started no longer compiling on latest nightly (`2020-02-28`, `2020-02-29` both have this issue).
```rust
pub struct Container<V>(pub V);
impl<T, V> core::ops::Mul<T> for Container<V>
where
T: From<V> + core::ops::Mul<T, Output = T>,
{
type Output = Container<T>;
fn mul(self, scalar: T) -> Self::Output {
Self(scalar * self.0.into())
}
}
```
I would expect this code snippet to compile with no errors as it currently does in stable and beta Rust.
Instead, on recent nightlies (02-28 and 02-29 were tested), an error is produced:
```
--> src/lib.rs:10:14
|
3 | impl<T, V> core::ops::Mul<T> for Container<V>
| - - expected type parameter
| |
| found type parameter
...
10 | Self(scalar * self.0.into())
| ^^^^^^^^^^^^^^^^^^^^^^ expected type parameter `V`, found type parameter `T`
|
= note: expected type parameter `V`
found type parameter `T`
= note: a type parameter was expected, but a different one was found; you might be missing a type parameter or trait bound
= note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters
```
For what it's worth, the substitution
```diff
pub struct Container<V>(pub V);
impl<T, V> core::ops::Mul<T> for Container<V>
where
T: From<V> + core::ops::Mul<T, Output = T>,
{
type Output = Container<T>;
fn mul(self, scalar: T) -> Self::Output {
- Self(scalar * self.0.into())
+ Container(scalar * self.0.into())
}
}
```
resolves the error.
I believe this is because `rustc` is erroneously picking `Self` to be `Container<V>` (the type for which the `impl` is being specified) instead of using `Container` and inferring the (formerly correct) output type of `Container<T>`.
### Meta
`rustc --version --verbose`:
```
rustc 1.43.0-nightly (d3c79346a 2020-02-29)
binary: rustc
commit-hash: d3c79346a3e7ddbb5fb417810f226ac5a9209007
commit-date: 2020-02-29
host: x86_64-apple-darwin
release: 1.43.0-nightly
LLVM version: 9.0
```
Answers:
username_0: Sure enough, this seems related to #69306, which was marked resolved by PR #69340. I guess I'm one of the lucky people affected by those changes. 😄
username_1: Yeah, this was caused by a bug fix to type checking of `Self` paths, which would previously wrongly introduce inference variables for type parameters in the impl. Using `Self` should always refer to the `Self` type, just as using `Self {...}` would. In other words, your code should never have compiled.
username_1: Closing as expected breakage and "wontfix".
Status: Issue closed
|
pelias/whosonfirst | 306228217 | Title: Resumable import?
Question:
username_0: If the import process has stopped and i'm re-running it, will it resume? or if i'm running an import and the data already exists, will it skip it or override everything?
Answers:
username_1: Hey @username_0,
There isn't resume functionality in any of our importers currently. Re-starting an importer with the same data will re-start from the beginning. Elasticsearch will handle that without issue. However if you were, say, updating data to a newer copy, you would want to delete the Elasticsearch index and start over.
For the Who's on First importer in particular, resume functionality wouldn't be super helpful, since it doesn't take very long to run. But the other importers do, so some sort of resume functionality would really help. I'm happy to discuss ideas for an implementation of that.
username_0: Well, I'm not that much of an expert in nodejs, but the easiest solution that comes to mind is to have an `import-state` file which holds the current index count; once the importer starts, it will check if we have an `import-state` file and resume from that point. Sounds reasonable?
username_1: That definitely sounds reasonable. I suspect for importers that handle multiple files, we would want to include a filename and record number to record how far it's progressed.
username_2: That means that it won't duplicate the data already in elasticsearch, right?
username_1: @username_2. Correct. From the perspective of the query the data will be overwritten with an identical copy. So there won't be duplicate records. However it's slightly less efficient as internally elasticsearch handles overwrites by creating a new record, marking it as version 2, and marking the original, version 1 record as deleted.
So for that reason if you're doing a big import and you need max performance its best to start from a clean index.
username_2: Thanks!
username_0: @username_1 so import state could be a `import-state.json` file which holds:
```
{
import: {
'filename': 1000,
'filename2': 2000
}
}
```
This can support both single/multiple files importers.
username_1: That would be sufficient. I think all of our importers have a deterministic order of processing files, so we could actually get by with storing only one filename at a time, as well as the progress within that file.
We already have several places in the code where a function is called to check into the progress of the import every few seconds (most notably in dbclient), so it is also advantageous if the import-state can be generated from a single moment in time rather than having to look at past progress.
Another important requirement of the implementation would be that the import-state file is removed when the importer completes successfully.
username_0: Not so sure I got your first point, but as if I recall correctly theres an output of `[dbclient] imported X records..` to the console when it finishes a file or a batch, that is the spot that the import-state file should be updated as far as I can see it.
Regarding the import-state removal upon completion, that's a must.
username_1: Lets say you have 10 files of 100 lines each. You could store
```
{
'file1': 100,
'file2': 100,
'file3': 50
}
```
To signify having completed the first two files and being half way through the 3rd. But assuming that the files are always processed in the same order, I think
```
{
'file3': 50
}
```
contains the same amount of information.
The ouput from `dbclient` is the right pattern to follow (I think), but not the right place for this to live. I think each importer would have to handle this, since each one processes files differently.
username_0: Oh, for sure once you finish with a file you don't need to specify it in the import-state file, only the file that is currently being processed (hence 'last touched')
username_1: Yeah. I think the openaddresses importer is the one to best try this out on, since it already has [some code](https://github.com/pelias/openaddresses/blob/master/lib/streams/recordStream.js#L47-L49) to run something periodically.
username_1: While I think it would be possible, I don't think we should do any work to support resumable imports in the near future.
With proper configuration, it's possible to do a complete Elasticsearch build in under 16 hours now, and adding the complexity of resume support would be a significant maintenance going forward.
If someone _really_ wants this, please let us know. But at this time we don't plan to support it.
Status: Issue closed
|
trailofbits/mishegos | 666373683 | Title: Replace Kaitai-generated parser with a handwritten one
Question:
username_0: Per #149: Kaitai is a bit of a mess to install reliably outside of the Docker container.
The cohorts format is simple enough; we should probably just rewrite `mish2jsonl` in C and re-use Mishegos's structure definitions in it.
Answers:
username_1: Regarding the binary format,
Currently we save the path to the .so in each output, wouldn't it be better to:
1. change the path to only include <worker_name>.so
2. promote the text of the worker_name to the header and either leave naming implicit or through an index?
username_0: I think so, yes. If I remember right my original reason for making them u64s was alignment, but most things in mishegos end up packed anyways. Most of those fields can definitely be u32/u16 and possibly even u8.
username_1: If those changes were to go through, would this end up causing any issues with other parts? There are asserts (for example, output_slots being 1050), but I'm not sure if those were only there because of the serialization.
username_0: I don't _think_ so -- the only structures and alignments that should matter for serialization are the ones that get dumped directly. The size of the `output_slot` was more about shared memory alignment, but it wasn't particularly well justified either (we should probably trust the compiler).
Status: Issue closed
username_0: #162. |
pybricks/support | 789422718 | Title: Why I receive an error running: "left_motor.run_time(300,15000, then=Stop.COAST, wait=False)"
Question:
username_0: **Question**
The first time my program executes the statement `left_motor.run_time(300,15000, then=Stop.COAST, wait=False)`, everything is fine. The second time, I get the following error message:
```
Traceback (most recent call last):
  File "/home/robot/MasterMind/main.py", line 307, in <module>
  File "/home/robot/MasterMind/main.py", line 55, in reset_position
OSError: [Errno 1] EPERM:
The requested operation is not valid in the current state:
--> Check the documentation to for required conditions.
--> Check the line in your script that matches
    the line number given in the 'Traceback' above.
----------
Exited with error code 1.
```
Here is the whole function:
```python
def reset_position():  # move the carriage all the way to the left and reset the position
    # Zero the pick motor position - run_until_stalled(speed, then=Stop.COAST, duty_limit=None)
    pick_motor.run_until_stalled(200, then=Stop.COAST, duty_limit=50)
    # reduce the motors' power
    left_motor.control.limits(actuation=25)
    right_motor.control.limits(actuation=25)
    # run_time(speed, time, then=Stop.HOLD, wait=True)
    left_motor.run_time(300, 15000, then=Stop.COAST, wait=False)  # <-- line 55
    right_motor.run_time(300, 15000, then=Stop.COAST, wait=False)
    # Wait until it is stalled
    while not left_motor.control.stalled() and not right_motor.control.stalled():
        wait(10)
    # Stop the motor (you can also choose brake or hold here)
    left_motor.stop()
    right_motor.stop()
    left_motor.reset_angle(0)
    right_motor.reset_angle(0)
    # restore the motors' power
    left_motor.control.limits(actuation=100)
    right_motor.control.limits(actuation=100)
    # Move the carriage all the way to the left
    x_motor.run_until_stalled(1200, then=Stop.COAST, duty_limit=50)
    x_motor.reset_angle(0)
```
**Context**
At the beginning of the program I call a function for resetting the motors. During the program I need to repeat this function.
**Screenshots**

Answers:
username_1: In the screenshot, it looks like line 55 is `left_motor.control.limits(actuation=25)` rather than `left_motor.run_time(300,15000, then=Stop.COAST, wait=False)`.
`EPREM` happens here if the control is busy doing something (running or holding motors). So you need to be sure that the motors are fully stopped before trying to change the limits.
```python
left_motor.stop() # or brake()
right_motor.stop() # or brake()
left_motor.control.limits(actuation=25)
right_motor.control.limits(actuation=25)
```
username_0: Sorry if I got confused.
I added the instructions you suggested and now everything is ok.
The motors were in holding.
Thanks David.
Status: Issue closed
|
kubernetes/kubernetes | 121217621 | Title: Change release process to automatically build and push hyperkube docker images with alpha/beta/official releases
Question:
username_0: As we move to docker setup being default for local clusters we should make sure that hyperkube is up to date.
@username_1 @quinton-hoole @brendandburns @roberthbailey @mikedanese
Answers:
username_1: Is this a duplicate of #11751?
Status: Issue closed
username_0: @username_1 It's a subset :) For now I mostly care about ```hyperkube```, but longer term we should release everything.
Nevertheless I'm closing this one as a dup and will try to work on fixing #11751 |
faircloth-lab/phyluce | 231002920 | Title: phyluce_probe_easy_lastz error
Question:
username_0: Hi Brant,
Yet another issue cropped up designing probes: phyluce_probe_easy_lastz chokes on both my Mac and the cluster with different error messages (see below). I started the probe design from scratch because the wonky headers got the best of me (and phyluce) and I made sure the input files were cleaned up. This new error is odd, because it ran for me before.
Any help is greatly appreciated!
Thanks,
Dietrich
Mac 10.12.5:
```
(phyluce) Dietrichs-MBP:bed dgotzek$ phyluce_probe_easy_lastz --target ferVir1+5.temp.probes --query ferVir1+5.temp.probes --identity 50 --coverage 50 --output ferVir1+5.temp.probes-TO-SELF-PROBES.lastz
Started: Wed May 24, 2017 06:58:52
Traceback (most recent call last):
  File "/Users/dgotzek/anaconda/envs/phyluce/bin/phyluce_probe_easy_lastz", line 80, in <module>
    main()
  File "/Users/dgotzek/anaconda/envs/phyluce/bin/phyluce_probe_easy_lastz", line 74, in main
    raise IOError(lztstderr)
IOError: /bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `/Users/dgotzek/anaconda/envs/phyluce/bin/lastz /Users/dgotzek/Dropbox (Smithsonian)/MealyBug_UCEs/bed/ferVir1+5.temp.probes[multiple,nameparse=full] /Users/dgotzek/Dropbox (Smithsonian)/MealyBug_UCEs/bed/ferVir1+5.temp.probes[nameparse=full] --strand=both --seed=12of19 --transition --nogfextend --nochain --gap=400,30 --xdrop=910 --ydrop=8370 --hspthresh=3000 --gappedthresh=3000 --noentropy --coverage=50.0 --identity=50.0 --output=/Users/dgotzek/Dropbox (Smithsonian)/MealyBug_UCEs/bed/ferVir1+5.temp.probes-TO-SELF-PROBES.lastz --format=general-:score,name1,strand1,zstart1,end1,length1,name2,strand2,zstart2,end2,length2,diff,cigar,identity,continuity'
```
Linux (64-bit CentOS 6.5) cluster:
```
[dgotzek@n15 bed]$ phyluce_probe_easy_lastz --target ferVir1+5.temp.probes --query ferVir1+5.temp.probes --identity 50 --coverage 50 --output ferVir1+5.temp.probes-TO-SELF-PROBES.lastz
Started: Wed May 24, 2017 06:59:18
Traceback (most recent call last):
  File "/usr/local/apps/phyluce/1.5.0/bin/phyluce_probe_easy_lastz", line 80, in <module>
    main()
  File "/usr/local/apps/phyluce/1.5.0/bin/phyluce_probe_easy_lastz", line 71, in main
    alignment = lastz.Align(args.target, args.query, args.coverage, args.identity, args.output, args.min_match)
  File "/usr/local/apps/phyluce/1.5.0/lib/python2.7/site-packages/phyluce/lastz.py", line 78, in __init__
    --format=general-:score,name1,strand1,zstart1,end1,length1,name2,strand2,zstart2,end2,length2,diff,cigar,identity,continuity'.format(target, query, coverage, identity, self.output, get_user_path("lastz", "lastz"))
  File "/usr/local/apps/phyluce/1.5.0/lib/python2.7/site-packages/phyluce/pth.py", line 27, in get_user_path
    pth = config.get(program, binary)
  File "/usr/local/apps/anaconda/2.2.0/lib/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'lastz'
```
Answers:
username_0: Weird, the path to dropbox never caused an issue for other programs. But it did the trick this time, thanks.
As for the missing config file for the server installation, I've let the gacrc folks know.
Cheers,
Dietrich
username_1: cool. closing the issue for now.
Status: Issue closed
|
flekschas/d3-list-graph | 178503973 | Title: When zooming out the full container width should be used
Question:
username_0: Currently, the zoom-out merely adjusts the `viewBox` of the main `svg` container. This is a pretty simple method but doesn't utilize the whole width of the container. Nodes could be drawn further away from each other, and edge bundling could come nicely into play to show hubs, for example.
samvera/hydra-head | 232404856 | Title: Solr schema should support DateRangeField
Question:
username_0: See: https://lucidworks.com/2016/02/13/solrs-daterangefield-perform/
Answers:
username_1: @username_0 What's the use case that would benefit by using a DateRangeField?
username_0: I don't think any argument has to be made that Hydra's schema should include as many solr field types as possible, but the reasons for this one are particularly compelling.
Firstly, the article above points out that the new field performs better for standard queries than the old one: `DateRangeField out-performed both TrieDate and DocValues in general.` So basically every use case would benefit.
But more specifically, lease/Embargo logic, amongst any other representations of actual date ranges (life of author, years of publication, etc.). The main value would be that we can use newer query syntax like INTERSECTS or CONTAINS with the date range that are not possible w/ any number of regular date fields or date field values. E.g.:
```
q={!field f=dateRange op=Contains v="[1999 TO 2001]"}
```
Unlike regular date queries, which require exact matches or date math applied to field (e.g. `/DAY`), [DateRangeField supports a concise truncated date format](https://cwiki.apache.org/confluence/display/solr/Working+with+Dates):
```
2000-11 – The entire month of November, 2000.
2000-11T13 – Likewise but for an hour of the day (1300 to before 1400, i.e. 1pm to 2pm).
-0009 – The year 10 BC. A 0 in the year position is 0 AD, and is also considered 1 BC.
[2000-11-01 TO 2014-12-01] – The specified date range at a day resolution.
[2014 TO 2014-12-01] – From the start of 2014 till the end of the first day of December.
[* TO 2014-12-01] – From the earliest representable time thru till the end of the day on 2014-12-01.
```
Lastly, at this layer, the intent should be to expose (or avoid concealing) as much solr functionality as possible. After all, we didn't ask "What's the use case indexed but unstored field_type_X values vs. indexed and stored field_type_X values?" Individual use cases would exist at higher levels.
username_1: We don't represent leases or embargoes as a range, so I didn't see that as a use case that would benefit.
username_0: Right, we would be able to combine both with a range field and simplify our modeling, terminology and querying. Since multiple values are supported, we could also have a decent logical representation of multiple periods of visibility ("Each July for the next 10 years"), which would otherwise be unintelligible ("10 Embargoes and 10 Leases with no relationship to each other").
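For reference, enabling the field type in the schema is short (a minimal sketch; the field names are illustrative):
```xml
<fieldType name="date_range" class="solr.DateRangeField"/>
<field name="publication_period" type="date_range"
       indexed="true" stored="true" multiValued="true"/>
```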
Status: Issue closed
|
mgm-interns/teamradio2-frontend | 321462107 | Title: Handle special cases for YouTube video when add a new song
Question:
username_0: ### Procedure:
- Join in a station
- Search a song
- There are some cases that occur:
  - A song with a UMG license ("_Hello Vietnam - Bonjour Vietnam (with lyric) - <NAME>_") cannot be played on the Heroku domain, but it can be on other domains.
  - A song is blocked by its owner and can only be played on YouTube. In this case, a user cannot add this song to the list, and the page should show a notification to warn users.
  - A song has been removed from YouTube. In the old version, it showed the "Not Found" icon to warn about this case, and a user could not add it to the list.
spatools/grunt-html-build | 83971765 | Title: including multiple src and dest files
Question:
username_0: I want to do something like ::
htmlbuild: {
emp:{
src: ['app/loggedout/index.html','app/loggedout/signin.html'],
dest: ['index.html','signin.html'],
options:{
sections:{
layout:{
header: 'header.html',
footer: 'footer.html'
}
}
}
}
},
means i want to attach the same header and footer to signin and index.html , but i guess this is not the correct format .. how can i achieve this ?<issue_closed>
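One possible approach is a per-target `files` mapping (a sketch, assuming grunt-html-build supports Grunt's standard files formats):
```js
htmlbuild: {
  emp: {
    files: [
      { src: 'app/loggedout/index.html', dest: 'index.html' },
      { src: 'app/loggedout/signin.html', dest: 'signin.html' }
    ],
    options: {
      sections: {
        layout: {
          header: 'header.html',
          footer: 'footer.html'
        }
      }
    }
  }
}
```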
Status: Issue closed |
j-gaba/CSSWENG-Group-6-S12 | 989328937 | Title: [US08: Add Product] Newly added product is displayed at the last row of the table
Question:
username_0: **Defect ID # 017**
**Build/Platform:** Google Chrome
**Description:**
Newly added products are being displayed at the bottom of the table. If there are many products in the system, it may take the user a long time to confirm whether their product was truly added to the inventory.
**Steps to Reproduce:**
1. Enter "landercua" as the username.
2. Enter "<PASSWORD>" as the password.
3. Click the Login button.
4. Click Product Inventory.
5. Click Add Product.
6. Enter "09/06/2021" as the Date of Purchase.
7. Select "Chevron Philippines" as the Supplier.
8. Enter "1234" as the Quantity.
9. Select "Diesel" as the Product.
10. Enter "12" as the Buying Price/Liter.
11. Enter "Manila" as the Stock Location.
12. Click Add Product.
**Actual Results:**
The newly added product is displayed at the last row of the table.
**Expected Results:**
The newly added product must be displayed at the first row of the table.
<issue_closed>
Status: Issue closed |
cdhart/cdhart-html | 989354907 | Title: colorschemez September 02 2021 at 05:23PM
Question:
username_0: <blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">coriaceous reddish pink hyaloid browny green truthless lightblue https://t.co/sppGYRovQC</p>
— colorschemer (@colorschemez) <a href="https://twitter.com/colorschemez/status/1433556280828858383">Sep 2, 2021</a>
</blockquote>
<br>
<br>
September 02, 2021 at 05:23PM<br>
via Twitter |
formio/formio.js | 430374293 | Title: CalculateValue not working on component inside the edit grid
Question:
username_0: When writing validation and a calculated value on the edit grid template, the validation fires correctly on each row separately, but the calculated value does not work at all, not even on the first row of the edit grid.
docker/docker-py | 563109764 | Title: Flaky test: AttachContainerTest.test_attach_no_stream
Question:
username_0: We've seen this test fail in moby CI (disabled it at some point https://github.com/moby/moby/pull/39848, but it looks to be still flaky after re-enabling it recently when we updated to the latest docker-py version https://github.com/moby/moby/pull/40467)
```
[2020-02-10T23:40:44.429Z] =================================== FAILURES ===================================
[2020-02-10T23:40:44.429Z] __________________ AttachContainerTest.test_attach_no_stream ___________________
[2020-02-10T23:40:44.429Z] tests/integration/api_container_test.py:1250: in test_attach_no_stream
[2020-02-10T23:40:44.429Z] assert output == 'hello\n'.encode(encoding='ascii')
[2020-02-10T23:40:44.429Z] E AssertionError: assert b'' == b'hello\n'
[2020-02-10T23:40:44.429Z] E Right contains more items, first extra item: 104
[2020-02-10T23:40:44.429Z] E Use -v to get the full diff
[2020-02-10T23:40:44.429Z] ------- generated xml file: /src/bundles/test-docker-py/junit-report.xml -------
```
Perhaps this needs a change similar to https://github.com/docker/docker-py/commit/ef043559c4bbd3d1fbc06277160c253fab6df879 (https://github.com/docker/docker-py/pull/2307) |
oxigraph/oxigraph | 680169027 | Title: Implements a SPARQL query results JSON parser
Question:
username_0: The [SPARQL query results JSON format](https://www.w3.org/TR/sparql11-results-json/) is currently write-only in Oxigraph. It would be nice to have a parser, maybe based on a streaming JSON parser library. It should be fairly similar to the XML parser written in `lib/src/sparql/xml_results.rs`.<issue_closed>
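A rough sketch of the non-streaming core, just to illustrate the shape of the format (using `serde_json`; a real parser would map bindings into Oxigraph's own term types):
```rust
use serde_json::Value;

/// Extracts (variable, value) pairs from a SPARQL JSON results document.
fn parse_select_results(doc: &str) -> Result<Vec<Vec<(String, String)>>, serde_json::Error> {
    let json: Value = serde_json::from_str(doc)?;
    let mut solutions = Vec::new();
    if let Some(bindings) = json["results"]["bindings"].as_array() {
        for binding in bindings {
            let mut solution = Vec::new();
            if let Some(map) = binding.as_object() {
                for (var, term) in map {
                    // each term object carries "type" ("uri", "literal", "bnode") and "value"
                    if let Some(value) = term["value"].as_str() {
                        solution.push((var.clone(), value.to_string()));
                    }
                }
            }
            solutions.push(solution);
        }
    }
    Ok(solutions)
}
```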
Status: Issue closed |
cyberark/secretless-broker | 351064316 | Title: The 'Learn More' tag has been restored
Question:
username_0: During one of the pushes, the styling for the docs bar has been dropped

Answers:
username_0: @username_1 was this purposeful or do we want a styled banner?
Status: Issue closed
|
rust-lang/rust | 755541273 | Title: "expected X found X" diagnostic for lifetime mismatch in method signature of trait impl
Question:
username_0: <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
With this code:
```rust
struct Foo(u32);
impl From<&Foo> for &u32 {
fn from(foo: &Foo) -> &u32 {
&foo.0
}
}
```
The error message is:
```rust
error: `impl` item signature doesn't match `trait` item signature
--> src/lib.rs:4:5
|
4 | fn from(foo: &Foo) -> &u32 {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ found `fn(&Foo) -> &u32`
|
= note: expected `fn(&Foo) -> &u32`
found `fn(&Foo) -> &u32`
= help: the lifetime requirements from the `impl` do not correspond to the requirements in the `trait`
= help: verify the lifetime relationships in the `trait` and `impl` between the `self` argument, the other inputs and its output
```
Now, the two `help` lines do give relevant information on how to fix this, which is to have a single named lifetime and use it everywhere:
```rust
struct Foo(u32);
impl<'a> From<&'a Foo> for &'a u32 {
fn from(foo: &'a Foo) -> &'a u32 {
&foo.0
}
}
```
However the part of the error that is more visible at first glance (at least to me) is:
```rust
= note: expected `fn(&Foo) -> &u32`
found `fn(&Foo) -> &u32`
```
My internal reaction was: if what was expected and found is the same thing, what’s the problem?
The two signature are in fact not the same because they each have their own implicit lifetime parameter which don’t necessarily match each other. Would it make sense to make those parameters explicit? Signatures in diagnostics don’t need to be entirely valid Rust syntax. Something like:
```rust
= note: expected `fn(&'_1 Foo) -> &'_1 u32`
found `fn(&'_2 Foo) -> &'_2 u32`
```
### Meta
The output is the same in 1.48.0 and nightly-2020-11-30.
Answers:
username_1: Related: https://github.com/rust-lang/rust/issues/76353, https://github.com/rust-lang/rust/issues/41078 |
kangwonlee/test_ipynb_pycpp | 368451940 | Title: Side effect
Question:
username_0: * [`test_build_markdown_cpp_cell.py `](https://github.com/username_0/test_ipynb_pycpp/blob/master/test_build_markdown_cpp_cell.py) generates extra files.
* It is because `build_markdown_cpp_cell()` changed its behavior.
* Need to verify further
Answers:
username_0: * a32c583e183fedfd4a604c37c84cf35216634191 resolves the issue
Status: Issue closed
|
MycroftAI/mycroft-core | 156872679 | Title: Echo cancelation in software
Question:
username_0: In order for the "Mycroft stop" voice command to function while playing audio or TTS, he needs to be able to hear himself over his own speaker. We have been playing with some pulse audio plugins that might do the job, but the unfortunate facts are:
1. We have to way to manipulate pulse audio streams, sinks, and sources.
2. If I remember correctly, portaudio doesn't play nice with the pulse microphone source.
3. This might cause a lot of overhead.
4. Cross platform compatibility issues.
Answers:
username_1: We'll narrow the scope of this to pertaining to the unit only (unless we can do this in a settings-respecting way on the desktop).
username_2: From there you have to open `pavucontrol` and select `Built in analog stereo (echo cancellation ....)` for all your `Playback` sources - you'll have to catch mycroft while he's speaking on the `Playback` tab - weather is good for this - and on the `Recording` you'll also have to set the source.
My biggest gripe about echo cancellation is it really lowers the audio levels. I'm not sure what it does on a low power device like a pi. The vu meter doesn't go up very high in comparison. I'd say it's about 50%~75% of the volume vs without the echo cancellation.
To test how good of a job echo cancellation was doing I opened up audacity and recorded a track from the mic. Overall ... pretty dang good.
username_2: I guess I should also mention I had some music playing and being piped through echo cancellation during the process as well.
username_3: I have had good results with this as well. It can be somewhat automated:
```sh
pactl load-module module-echo-cancel source_name=echosource sink_name=echosink
pactl set-default-sink echosink
pactl set-default-source echosource
```
These lines can be run in the terminal and will create and select the echo cancelled source and sink. The lines could also be included in the pulsaudio configuration.
username_0: I wonder if we couldn't create a utility for this?
username_4: Hi! I would like to create this utility. I am a newbie when it comes to open source, could you guide me?
username_5: Closing in favor of #1478
Status: Issue closed
|
dmlc/dgl | 695625007 | Title: edge_softmax function
Question:
username_0: ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. Just run the demo codes in dgl.ops.edge_softmax
1. run the demo: "edge_softmax(g, edata[:4], th.Tensor([0,1,2,3]))"
1. Got the error: "gidx = gidx.edge_subgraph(eids.type(gidx.dtype), True)
ValueError: invalid type: 'int64'"
## Expected behavior
`eids.type` should accept the type from `gidx.dtype`, but `gidx.dtype` is a string while `eids.type()` can only accept objects like `torch.int32`. So, in order to correctly create the subgraph here, perhaps `edge_subgraph` should accept a tensor-typed index? Looking forward to your reply. Thanks!
## Environment
- DGL Version: latest
- PyTorch: 1.6.0
- OS (e.g., Linux): windows
- How you installed DGL (`conda`, `pip`, source): conda
- Build command you used (if compiling from source):
- Python version:3.8.5
Answers:
username_1: @username_0 , thanks for reporting this bug, will fix it soon.
Status: Issue closed
|
abclinuxu/abclinuxu | 880660560 | Title: Spatne linky ve skupinach (Bugzilla Bug 1132)
Question:
username_0: This issue was created automatically with bugzilla2github
# Bugzilla Bug 1132
Date: 2008-09-12 14:50:46 -0400
From: stps <<<EMAIL>>>
To: <NAME> <<<EMAIL>>>
Last updated: 2008-09-12 14:57:42 -0400
## Comment 3730
Date: 2008-09-12 14:50:46 -0400
From: stps <<<EMAIL>>>
The "next/previous groups" link below the list of groups is incorrectly directed to the bazaar.
## Comment 3731
Date: 2008-09-12 14:57:42 -0400
From: <NAME> <<<EMAIL>>>
Fixed in CVS; I will deploy it shortly.<issue_closed>
Status: Issue closed |
baidu/amis | 615752188 | Title: Nested form issue
Question:
username_0: A form is nested inside another form. I hid the inner form's buttons and did not set an api on it (the inner form contains a JSON editor, because writing a JSON editor directly under a tab makes your framework throw an error). Only the outermost form has an api set. The problem now is that every time I change the editor, an api request is sent. I tried adding submitOnChange on both the outer and inner forms, but neither solved it.
```json
{
  "type": "form",
  "api": "post:http://10.243.238.44:8000/fed/pushCode",
  "initApi": "get:http://10.243.238.44:8000/fed/tagDetail",
  "submitOnChange": false,
  "title": "",
  "controls": [
    {
      "type": "grid",
      "columns": [
        {
          "label": "Preview",
          "type": "button",
          "level": "dark",
          "actionType": "close",
          "tpl": "sm-1",
          "sm": 2,
          "inline": false,
          "className": "review"
        },
        {
          "label": "Save",
          "type": "button",
          "level": "dark",
          "actionType": "dialog",
          "dialog": {
            "title": "Dialog",
            "body": "This is a simple dialog."
          },
          "tpl": "sm-1",
          "sm": 4,
          "inline": false
        }
      ]
    },
    {
      "type": "tabs",
      "tabs": [
        {
          "title": "JSON editor",
          "body": {
            "type": "form",
            "wrapWithPanel": false,
            "className": "json_superman",
            "submitOnChange": false,
            "title": "",
            "controls": [
              {
                "name": "json",
                "type": "editor",
                "language": "json"
              },
              {
                "type": "tpl",
                "tpl": "<%=data.json %>"
              }
            ]
          }
        }
      ]
    }
  ],
  "md": 4,
  "height": 500
}
```
Answers:
username_1: Fixed; with version 1.0.11-beta.38 this problem should be gone.
That said, I don't think you actually need to put a form inside a form; fieldSet might already satisfy this requirement 😈
username_0: OK, I'll give it a try.
username_1: Reopen if there are further problems; closing for now.
Status: Issue closed
username_0: The fieldSet you mentioned still requires an outer form, right? I tried that and got a "renderer not found" error, because my form contains a tab control and the fieldSet is placed under the tab control.
username_0: OK, it works now. The tab has a controls property, which solved my problem.
wso2/kubernetes-ei | 315564723 | Title: Refine Docker resources for Analytics profile
Question:
username_0: **Description:**
The existing Dockerfile and related resources for building the Docker image of WSO2 Enterprise Integrator's Analytics profile need refinements.
The existing Dockerfiles use a complex set of bash scripts. Based on our experience and the feedback received from our users we found that it would be much better to have plain Dockerfiles for building WSO2 Docker images than incorporating such features.
**Suggested Labels:**
Type/Improvement
**Suggested Assignees:**
username_0
**Affected Product Version:**
WSO2 Enterprise Integrator v6.2.x
**Parent Issues:**
https://github.com/wso2/kubernetes-ei/issues/25
Status: Issue closed
Answers:
username_0: The issue is fixed in above referenced PRs. |
centreon/centreon | 261255960 | Title: REST API : DELTEMPLATE is not producing the same result as through the web interface
Question:
username_0: ---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Centreon Web version: 2.8.9-19.el7
Centreon Engine version: 1.7.2-3.el7
Centreon Broker version: 3.0.7-1.el7
OS : Red Hat Enterprise Linux Server 7.3 (Maipo)
Additional environment details (AWS, VirtualBox, physical, etc.): environment=ESX
Steps to reproduce the issue:
1. Remove a template from a host in the web interface ==> the services related to this template ARE ALSO REMOVED
2. Do the same operation through the REST API (commands below) ==> the services related to this template ARE STILL RUNNING (you then get to enjoy clicking through the web interface to remove them!)
curl --request POST \
--url 'http://192.168.42.245/centreon/api/index.php?action=authenticate' \
--header 'cache-control: no-cache' \
--header 'content-type: multipart/form-data' \
--form username=admin \
--form password=<PASSWORD>
curl 'http://192.168.42.245/centreon/api/index.php?action=action&object=centreon_clapi' \
-H 'Content-Type: application/json' \
-H 'centreon-auth-token:<PASSWORD>XXXXXXXXXXX' \
--data '{
"action":"DELTEMPLATE",
"object":"HOST",
"values": "MyHost;MyTemplate"}'
PS : APPLYTPL doesn't switch off the services when you run it after DELTEMPLATE. |
teamcapybara/capybara | 376473219 | Title: Cannot click_on a link or button with aria-label in a Rails app
Question:
username_0: Capybara won’t find and click on a button or a link with aria-label, even with `Capybara.enable_aria_label = true`
## Meta
Capybara Version:
```Using capybara 3.10.0```
Driver Information (and browser if relevant):
```driven_by :selenium, using: :chrome, screen_size: [1400, 1400]```
## Failing test script
```ruby
# frozen_string_literal: true
require 'bundler/inline'
gemfile(true) do
source 'https://rubygems.org'
git_source(:github) { |repo| "https://github.com/#{repo}.git" }
# Activate the gem you are reporting the issue against.
gem 'capybara'
gem 'rails', '5.2.0'
gem 'webdrivers'
end
require 'rack/test'
require 'action_controller/railtie'
class TestApp < Rails::Application
config.root = __dir__
config.session_store :cookie_store, key: 'cookie_store_key'
secrets.secret_key_base = 'secret_key_base'
config.logger = Logger.new($stdout)
Rails.logger = config.logger
routes.draw do
get '/' => 'test#index'
end
end
class TestController < ActionController::Base
include Rails.application.routes.url_helpers
def index
render inline: <<~HTML
<a href="#" aria-label="Link">icon</a>
<button aria-label="Button">icon</button>
HTML
end
end
Capybara.server = :webrick
Capybara.enable_aria_label = true
require 'minitest/autorun'
class BugTest < ActionDispatch::SystemTestCase
[Truncated]
click_on 'Button'
end
private
def app
Rails.application
end
end
=begin
Error:
BugTest#test_click_on_link_by_aria_label:
Capybara::ElementNotFound: Unable to find link or button "Link"
Error:
BugTest#test_click_on_button_by_aria_label:
Capybara::ElementNotFound: Unable to find link or button "Button"
=end
```
Answers:
username_1: Looks like `aria-label` isn't support in the :link_or_button_selector (which is used by `click_on`) -- it will work if you use `click_link` or `click_button` respectively
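For example, against the markup in the report above, the individual actions already resolve the aria-label (a quick sketch):
```ruby
Capybara.enable_aria_label = true

click_link 'Link'      # finds <a href="#" aria-label="Link">
click_button 'Button'  # finds <button aria-label="Button">
```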
username_1: Hmmm -- although I have no idea why -- it's just combining the separate `link` and `button` selectors so it should be supported
username_1: Ok -- I believe this is fixed in the `link_or_button_aria` branch -- waiting for tests to pass
username_1: @username_0 You're welcome -- thank you for the clear issue report with code to replicate -- makes things MUCH easier 👍 |
rmosolgo/graphql-ruby | 327451030 | Title: New API: Mutation doesn't resolve if no arguments declared in mutation class
Question:
username_0: @username_1 Hey, this used to work in the old setup but not with the new class-based API (or maybe I am doing something wrong).
Consider this mutation, with no arguments supplied:
```rb
class FooMutation < Base::Mutation
graphql_name 'Foo'
description 'returns foo'
field :foo, String, null: false
def resolve
"bar"
end
end
```
Failing code:
```rb
# field.rb
puts "method", @method_sym
puts "kwargs", ruby_kwargs.inspect
if ruby_kwargs.any?
obj.public_send(@method_sym, **ruby_kwargs)
else
# seems to work if we pass an empty hash but then it fails on query calls
obj.public_send(@method_sym)
end
```
```rb
# relay_classic_mutation.rb
def resolve_mutation(kwargs)
# This is handled by Relay::Mutation::Resolve, a bit hacky, but here we are.
puts "resolve", kwargs.inspect
kwargs.delete(:client_mutation_id)
resolve(**kwargs)
end
```
Stacktrace:
```bash
ArgumentError (wrong number of arguments (given 0, expected 1)):
api :
api : /Users/admin/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/graphql-1.8.0/lib/graphql/schema/relay_classic_mutation.rb:30:in `resolve_mutation'
api : /Users/admin/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/graphql-1.8.0/lib/graphql/schema/field.rb:367:in `public_send'
api : /Users/admin/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/graphql-1.8.0/lib/graphql/schema/field.rb:367:in `public_send_field'
api : /Users/admin/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/graphql-1.8.0/lib/graphql/schema/field.rb:288:in `resolve_field'
api : /Users/admin/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/graphql-batch-0.3.9/lib/graphql/batch/setup.rb:19:in `call'
```
Answers:
username_0: Ahh yes, that's probably it. Shall I make a PR? I am not very familiar with the codebase though 😄 .
username_0: I guess we can just pass a default param here:
```rb
def resolve_mutation(kwargs = {})
# This is handled by Relay::Mutation::Resolve, a bit hacky, but here we are.
kwargs.delete(:client_mutation_id)
resolve(**kwargs)
end
```
What do you think?
username_0: Because it seems that when there are no arguments, no argument at all is passed to `resolve_mutation`: `ArgumentError (wrong number of arguments (given 0, expected 1))`
Status: Issue closed
|
pangeo-data/pangeo-cloud-federation | 736890903 | Title: AWS deploy failing
Question:
username_0: Started with my migration away from prometheus operator: https://github.com/pangeo-data/pangeo-cloud-federation/actions/runs/345529754
I'll look into it today.
Answers:
username_0: Seeing this in the logs while deploying
```
Error syncing load balancer: failed to ensure load balancer: LoadBalancerIP cannot be specified for AWS ELB
```
This is from the ingress-nginx specified in `icesat2/secrets/staging.yaml` that's used for grafana ingress. I see some references to this in, e.g. https://github.com/kubernetes/cloud-provider-aws/issues/48, but I can't seem to find a solution. @salvis2 does anything come to you immediately? If not then I'll keep digging into it.
username_0: https://github.com/pangeo-data/pangeo-cloud-federation/runs/1364947717?check_suite_focus=true passed with the loadBalancerIP for grafana unset.
I'll see if I can figure out how to get a static IP to point to.
username_1: Is this why I keep getting `This site can’t be reached`?
username_2: @username_1 are you trying to access the grafana monitoring page? That is what this is referring to; I think the AWS hub itself is running fine.
@username_0 we used to be able to see hub metrics at https://grafana.staging.aws-uswest2-binder.pangeo.io/ , which no longer works I believe due to the changes linked above. Would be great to have that back up so that we can investigate activity and costs over the last two weeks https://github.com/pangeo-data/jupyterhub-monitoring
username_0: I think I had some trouble with how AWS does ingress. Some issue with assigning a fixed IP to an ELB, so that we can point the DNS entry to it. I don't believe I was able to resolve it.
Sebastian had a comment in the AWS secrets file about issues with getting the IP. I don't recall the details though.
People with access to the kubernetes cluster can still access grafana with `kubectl port-forward` to the ingress service. IIRC grafana uses port 9000.
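A sketch of that workaround for anyone who needs it; the namespace and service name below are placeholders, not the actual cluster values:
```sh
# forward a local port to the cluster's ingress/grafana service, then open
# http://localhost:9000 in a browser (names here are hypothetical)
kubectl port-forward --namespace staging svc/nginx-ingress-controller 9000:80
```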
username_2: Ok, thanks for the tip! I'll check it out
username_1: No I guess this is separate. I experienced frequent `This site can’t be reached` messages earlier in the week. I'll make a separate issue if the problem persists. Thanks!
username_2: closed by #895
Status: Issue closed
|
KamasamaK/vscode-cfml | 310155800 | Title: Syntax highlight issue in closures
Question:
username_0: Using a try catch statement in a closure seems to cause the syntax highlighting to become a bit wonky. A code example that will cause it is:
```cfm
<cfscript>
data = [1, 2, 3, 4];
data.each(function(item, index) {
var errors = [];
var flag = false;
try {
flag = true;
} catch (any e) {
flag = false;
}
if (flag) {
errors.append({'index': index});
}
});
</cfscript>
```
Example of how it looks for me:

I'm using vscode-cfml version 0.3.1 and vscode version 1.21.1
Answers:
username_1: This is fixed in 0.4.0
Status: Issue closed
|
microsoft/playwright | 1168694858 | Title: [Feature] add new option on junit reporter to use CDATA sections to encode content
Question:
username_0: Provide a new option on the junit reporter to control whether to use CDATA sections to encode text content in the JUnit XML elements.
This provides a safe mechanism to add content to elements such as `<system-out>` and others that use inner text, where we don't control what the users provide. Parsers usually skip (i.e., don't parse) the contents of CDATA sections, stripping only the opening and closing CDATA delimiters.
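For illustration, an element encoded this way would look like:
```xml
<testcase name="example">
  <system-out><![CDATA[raw output that may contain <tags> & ampersands]]></system-out>
</testcase>
```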
Answers:
username_1: Would you be open to sending a PR for this?
username_0: @username_1 on its way, resolving a conflict
username_0: ready for review @username_1
(note: I'm just waiting from a formal email from my org so I can sign the CLA; should come soon)
username_1: Assigning to Yury who is reviewing this PR. |
dotnet/runtime | 800841273 | Title: Missing unicode/uclean.h in mono M1 runtime build
Question:
username_0: When building for `-os osx -arch arm64` on the M1, we'll get this runtime build error.
```
In file included from /Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_calendarData.c:8:
In file included from /Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_locale_internal.h:6:
/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_icushim_internal.h:22:10: fatal error: 'unicode/uclean.h' file not found
#include <unicode/uclean.h>
^~~~~~~~~~~~~~~~~~
1 error generated.
make[2]: *** [mono/mini/CMakeFiles/monosgen-objects.dir/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_calendarData.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from /Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_casing.c:9:
/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_icushim_internal.h:22:10: fatal error: 'unicode/uclean.h' file not found
#include <unicode/uclean.h>
^~~~~~~~~~~~~~~~~~
1 error generated.
make[2]: *** [mono/mini/CMakeFiles/monosgen-objects.dir/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_casing.c.o] Error 1
In file included from /Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_collation.c:11:
In file included from /Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_errors_internal.h:6:
/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_icushim_internal.h:22:10: fatal error: 'unicode/uclean.h' file not found
#include <unicode/uclean.h>
^~~~~~~~~~~~~~~~~~
1 error generated.
make[2]: *** [mono/mini/CMakeFiles/monosgen-objects.dir/Users/dotnet-dev/dev/runtime/src/libraries/Native/Unix/System.Globalization.Native/pal_collation.c.o] Error 1
make[1]: *** [mono/mini/CMakeFiles/monosgen-objects.dir/all] Error 2
```<issue_closed>
Status: Issue closed |
keeleinstituut/ekilex | 1059181157 | Title: `wordValue` vs `wordValuePrese`
Question:
username_0: ```
"wordId": 237885,
"wordValue": "suuline",
"wordValuePrese": "suuline",
```
What is the difference between the two values; what is `Prese` supposed to denote?
Answers:
username_1: Prese is the "presentation form" and may contain <eki-...> markup. The formative (roughly speaking, the case ending) is colored green, citation words are in italics, stressed Russian vowels are bold, vitamin B12 gets a subscript, and so on. Prese also exists on other text elements, where it has other uses as well.
Value contains no markup and is suitable for programs to use for searching, sorting, etc.
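A hypothetical illustration of the difference; the `<eki-sub>` tag name here is invented for the example, not the actual markup:
```json
{
  "wordValue": "B12-vitamiin",
  "wordValuePrese": "B<eki-sub>12</eki-sub>-vitamiin"
}
```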
Status: Issue closed
|
libp2p/rust-libp2p | 322592772 | Title: Compile issue error: paths in `use` groups are experimental (see issue #44494)
Question:
username_0: I have try to compile the lates master
I get the following issue
error: paths in `use` groups are experimental (see issue #44494)
Maybe it is a compiler version issue.
What is the compiler version used.
See the script dump belove including the compiler version and the OS version.
```
Compiling tokio-tcp v0.1.0
Compiling ring v0.12.1 (https://github.com/briansmith/ring?rev=3a14ef619559f7d4b69e2286d49c833409eef34a#3a14ef61)
Compiling cid v0.2.3
Compiling varint v0.1.0 (file:///home/cbr/work/hashwave/rust-libp2p/varint-rs)
warning: missing documentation for a variant
--> varint-rs/src/lib.rs:46:5
|
46 | / error_chain! {
47 | | errors {
48 | | ParseError {
49 | | description("error parsing varint")
... |
60 | | }
61 | | }
| |_____^
|
note: lint level defined here
--> varint-rs/src/lib.rs:21:9
|
21 | #![warn(missing_docs)]
| ^^^^^^^^^^^^
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
warning: missing documentation for a variant
--> varint-rs/src/lib.rs:46:5
|
46 | / error_chain! {
47 | | errors {
48 | | ParseError {
49 | | description("error parsing varint")
... |
60 | | }
61 | | }
| |_____^
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
warning: missing documentation for a method
--> varint-rs/src/lib.rs:285:5
|
285 | pub fn source(&self) -> &T {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
warning: missing documentation for a struct
--> varint-rs/src/lib.rs:408:1
|
408 | pub struct VarintCodec<W> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^
[Truncated]
rumskib:~/work/hashwave/rust-libp2p> rustc --version
rustc 1.24.1
rumskib:~/work/hashwave/rust-libp2p> cargo --version
cargo 0.25.0
rumskib:~/work/hashwave/rust-libp2p> lsbv
lsb_release lsblk
rumskib:~/work/hashwave/rust-libp2p> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 17.10
Release: 17.10
Codename: artful
```
Answers:
username_1: Your compiler is outdated! We require at least Rust 1.25, and soon 1.26.
If you have `rustup` installed, running `rustup update` should fix it.
Status: Issue closed
username_0: Thanks! |
wladich/nakarte | 329316609 | Title: Wrong JNX files projection
Question:
username_0: The generated JNX maps have a Web Mercator projection instead of the Lat/Lon (2D geographic) projection specified in the "standard" (and used by Garmin). This leads, by definition, to some display distortions/inaccuracies, for example see the attached screenshot from GPXSee:

If you are not able to correctly re-render the tiles, at least specifying the projection in the JNX file could help, so that 3rd-party SW tools can render the maps correctly. My suggestion would be to add the EPSG code (3857 in the case of WGS84/Web Mercator) to the "Unknown" level description field at offset 0x0C, which you currently fill with a magic value of '2'.
Answers:
username_0: Maybe a better place for the projection info is the "Usually empty string. Does not seem to be currently used." string in the "Map info sub-block". A standard identifier string like used in WMS/WMTS - "EPSG:3857" - instead of the current empty string would be all that is needed to correctly display the JNX maps.
username_1: JNX format is intended to be used primarily in Garmin devices. I haven't seen black stripes in my Garmin 78. Do you have an example of JNX files generated in nakarte.tk being incorrectly displayed in some Garmin devices?
username_0: I don't have any Garmin device, sorry. But all other JNX maps I have seen are in lat/lon (geographic 2D) projection like described at http://whiter.brinkster.net/en/JNX.shtml. The Garmin devices may draw the map differently so maybe the map looks OK and the tracks are distorted/displaced. However generally something will be wrong when the projection of the map is not the expected one.
Status: Issue closed
username_1: Garmin stretches tiles linearly according to the coordinates of their corners.
Tiles are relatively small and the coordinates of tile corners are correct (I hope they are), so distortions should be negligible. By the way, Garmin displays the map with a projection which looks similar to web mercator, so it is possible that jnx from nakarte are ultimately displayed with fewer distortions than other maps (but again, those distortions should be hard to notice).
username_0: Garmin may stretch the tiles as you say so there are no gaps, but the distortion is still there. I understand that you don't want to reproject the tiles, as it is quite a lot of work and the error is not that visible on Garmin devices, but adding the projection code to the map info block can help 3rd-party tools display the maps correctly, and it is just a one-line patch: simply replacing "" (the empty string) with "EPSG:3857" is all that is needed.
keystonejs/keystone | 88783419 | Title: Display relationships in admin lists
Question:
username_0: Is it possible to display relationship values (e.g. a parent) in an admin list using `defaultColumns`?
In the following example, I would to display the parent User for each Post.
```
User.add({
name: { type: String, required: true },
posts: { type: Types.Relationship, ref: 'Post', many: true }
});
------------------------------
Post.add({
title: { type: String, required: true }
});
Post.relationship({ path: 'users', ref: 'User', refPath: 'posts' });
```
Thanks!
Status: Issue closed
Answers:
username_1: I think there are several issues requesting this. https://github.com/keystonejs/keystone/issues/564 https://github.com/keystonejs/keystone/issues/23 |
vapor/toolbox | 176368495 | Title: "vapor docker build" not handling Docker's successful build
Question:
username_0: I can see that Vapor Toolbox is not exiting even after the Docker command succeeds, with version 0.10.4. Below is the console log. However, I can see the docker images are built and ready to run in a container.
```
compname:helloworld username$ vapor docker build
Building Docker image [ • ]
compname:helloworld username$ vapor docker run
Copy and run the following line:
docker run --rm -it -v $(PWD):/vapor -p 8080:8080 vapor/swift:DEVELOPMENT-SNAPSHOT-2016-09-06-a
compname:helloworld username$ docker run --rm -it -v $(PWD):/vapor -p 8080:8080 vapor/swift:DEVELOPMENT-SNAPSHOT-2016-09-06-a
Compile CLibreSSL a_bitstr.c
...
```
```
compname:helloworld username$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
vapor/swift DEVELOPMENT-SNAPSHOT-2016-09-06-a 7945a182914f 4 minutes ago 989.4 MB
ubuntu 14.04 4a725d3b3b1c 2 weeks ago 188 MB
```
Status: Issue closed
Answers:
username_1: Closing this due to inactivity. Please feel free to reopen. |
dawndelatte/dawndelatte.github.io | 65104428 | Title: Personal site suggestions for homepage
Question:
username_0: ## suggestions for "latest entry"" smaller and less padding"

## no space after em-dash

## Smaller headers in footer; they stand out too much. Also consider a light background for more separation, or a line at the top of the footer.

## visually centered "/"
<issue_closed>
Status: Issue closed |
thomasloven/lovelace-state-switch | 802820297 | Title: flip transition and ios
Question:
username_0: Hi
I am having some weird issues the the state-switch in combination with the custom:button-card (https://github.com/custom-cards/button-card).
I dont know if the problem lies with the state-switch or the button, however ill try posting here :)
I have tried to setup the following code: https://pastebin.com/dsPK3dyJ (i removed alot of code to make it more simple to go through)
The idea is to use the state-switch to show different buttons depending on what room (state) is selected.
The problem only happens when using the "flip" transition, and it only happens on iOS. Android and PC works fine.
Whenever i add more than one state, only the last added state is working as it should.
When using buttons on all other states than the last one added, the buttons doesnt activate properly.
Here is an example of what is happening.
Lets say i have 2 states Living room and kitchen (kitchen was added last).
Each state has 3 buttons aligned horizontally.
The buttons are as follows:
Livingroom: desk-lamp - led strip - Ceilling lights
Kitchen: All Lights - table - sink
If i go to the living room state and hit the "Ceilling lights" button it will toggle the "all lights" button in the kitchen tab.
If i hit the "desk lamp" button it will activate the "sink" button on the kitchen tab.
This happens with all the tabs except the last one.
It is always the mirrored button that activates. button 1 activates button 3, button 2->2 and button 3->1
If I substitute the "custom:button-card" with an ordinary button, it works fine.
I hope this made any sense.
Any idea on what is going wrong?
Answers:
username_1: Not the dev, but I eliminated the issue by just turning off the animations entirely and changing the cards based on the URL hash. The dev has a whole section about it, it's very easy.
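For anyone wanting to try that approach, a minimal sketch of a hash-driven setup with the animations off (state names and card bodies are illustrative):
```yaml
type: custom:state-switch
entity: hash        # switch on the URL hash instead of an entity
# no transition configured, so nothing animates
states:
  livingroom:
    type: custom:button-card
    # ... living room buttons ...
  kitchen:
    type: custom:button-card
    # ... kitchen buttons ...
```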
username_2: I just revamped my mobile dashboards this weekend and I can verify I have the same issue, but only when using the "Flip" transition. Any other transition works (slide and swap).
Only an issue on the Home Assistant iOS app. Both Safari and Chrome on my Mac work as expected.
username_3: Came here to confirm. Still present running on iOS and HA (2021.7.3). I hope this get fixed soon. I only have one iOS device and multiple Android ones, a shame that iOS is holding everything else back (with web). |
DmitriiSer/angular2-collapsible | 492416377 | Title: Always a weird box-shadow
Question:
username_0: I would like to remove the box-shadow, when I set the box-shadow to none. It first applies your baked in box-shadow and than applies my style where it removes it.
You have any way to completely remove box-shadow of collapsible-body? |
nix-community/emacs-overlay | 1065663612 | Title: Why is Emacs master not updating?
Question:
username_0: The automatic update of the emacs repo seems not to work. The last commit with the description "Updated repos/emacs" was [6196dc2](https://github.com/nix-community/emacs-overlay/commit/6196dc26f29e155ce5716070a44e59b92ad0b6ca), committed 6 days ago.
Answers:
username_1: ```
2021-12-02T08:26:05.0947158Z pgtkterm.c: In function 'set_fullscreen_state':
2021-12-02T08:26:05.0949047Z pgtkterm.c:4278:10: error: label at end of compound statement
2021-12-02T08:26:05.0952721Z 4278 | case FULLSCREEN_HEIGHT:
2021-12-02T08:26:05.0953114Z | ^~~~~~~~~~~~~~~~~
2021-12-02T08:26:05.1474187Z make[1]: *** [Makefile:409: pgtkterm.o] Error 1
2021-12-02T08:26:05.1478284Z make[1]: Leaving directory '/build/source/src'
2021-12-02T08:26:05.1486289Z make: *** [Makefile:459: src] Error 2
2021-12-02T08:26:05.2885976Z error: builder for '/nix/store/jgrsqpvmq9892rnpdcz2j019iysip8wd-emacs-pgtk-20211202.0.drv' failed with exit code 2;
```
username_0: Can anyone help to fix this, or temporarily disable the broken `emacs-pgtk`?
username_1: This was fixed upstream: https://github.com/emacs-mirror/emacs/commit/f638541785f0641f3010fa9c4393a4c32710d47e
Status: Issue closed
|
tinymce/tinymce | 299431826 | Title: Indentation Feature Request
Question:
username_0: **Do you want to request a *feature* or report a *bug*?**
FEATURE
**What is the current behavior?**
When Indentation is selected for a block of text, the text indents fine when viewed within Web Browsers however it does not indent within HTML emails. This is due to Indentation currently using Padding in lieu of Margin.
**What is the expected behavior?**
Provide an option to choose indentation method using Margin or Padding.
**Which versions of TinyMCE, and which browser / OS are affected by this issue? Did this work in previous versions of TinyMCE?**
Status: Issue closed
Answers:
username_1: Hi! Seems like we already have this functionality but never added it to the documentation, so I added it now: https://www.tinymce.com/docs/configure/content-formatting/#indent_use_margin
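For reference, a minimal sketch of the documented option in use:
```js
tinymce.init({
  selector: 'textarea',
  // use margin rather than padding for indentation (per the linked docs)
  indent_use_margin: true
});
```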
username_0: Thank you! |
jscottsmith/react-scroll-parallax | 307378072 | Title: Choppy parallax on safari
Question:
username_0: Hi,
I'm using your component in an app I'm currently working on, but it's quite choppy on scroll... There are some animations running at the same time, but it works perfectly on Chrome. Firefox is a bit choppy too.
I read on the web that it could be solved with hardware acceleration. I tried setting the two following properties on .parallax-inner:
```
perspective: 1000px;
backface-visibility: hidden;
```
But that has no effect... Any clue why it's so slow and choppy? And how to solve it?
Thanks a lot :)
Answers:
username_1: Nope. Hard to know based on this information alone. I also doubt GPU hacks will help much. Most likely it's due to how the component is being used or other heavy animations/processes that are running on the page.
I'll need to see your particular setup (repo, or live site) to provide any useful help/advice.
username_0: Yep. There is sadly a heavy process requested by the customer: a magnetic scroll.
https://github.com/username_0/react-magnetic-scroll
And I'm also using a sticky component... I optimized the magnetic scroll a bit, and now it's perfect on Firefox, but it remains slow on Safari.
The parallax components are set in a container that hosts all of them. This "mother" container is positioned absolutely, because the elements have to sit over different pages.
I noticed that the parallax elements weren't using the acceleration hack; that's why I was wondering whether there was a solution on your side... :/
thanks for your quick response.
username_1: Thanks for the explanation but like I said I'll need to see the code to provide any useful advice. There's far too many factors to consider otherwise. :)
Let me know if you can't provide the repo or site.
Status: Issue closed
username_2: @username_0 delete the old service worker (if you have one) and try again, sometimes the old cache conflicts with the new one. I had the same situation and solved it by doing so. There's a Chrome plugin called Clear Service Worker that would help.
CC: @username_1
username_1: @username_2 Interesting, I suppose that could be an issue if he has a service worker too. Not sure what his setup looks like though.
username_0: I'm not using any service worker. Anyway, no problem at all with chrome, the issue is with Safari, only Safari... :/
username_1: Are you using v1? Just wondering because the v2 alpha uses the IntersectionObserver which hasn't been implemented in Safari and can result in some stuttering from the polyfill.
Would you mind setting up a repo that reproduces the choppyness? I'd be interested in taking a look as mentioned before :)
username_1: Ya that sounds like the likely culprit.
And v2 is a prerelease so unless you installed it explicitly with `@alpha` or `@next` you should be on v1 -- I wouldn't use v2 in production yet ;) |
briansmith/ring | 975488788 | Title: Ring does not cross compile from linux to x86_64-pc-windows-gnu
Question:
username_0: Ring does not cross compile from linux to x86_64-pc-windows-gnu (using mingw64 toolchain). I guess we need to differentiate between native compilation and cross compilation in `build.rs`.
```
cross build --target x86_64-pc-windows-gnu
Compiling ring v0.17.0-not-released-yet (/project)
error: failed to run custom build command for `ring v0.17.0-not-released-yet (/project)`
Caused by:
process didn't exit successfully: `/target/debug/build/ring-076d38a9c102f180/build-script-build` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=RING_PREGENERATE_ASM
cargo:rustc-env=RING_CORE_PREFIX=ring_core_0_17_0_not_released_yet_
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
cargo:rerun-if-env-changed=PERL_EXECUTABLE
--- stderr
running "perl" "crypto/chacha/asm/chacha-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha-x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/aes/asm/aesni-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/aesni-x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/aes/asm/vpaes-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/vpaes-x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/bn/asm/x86_64-mont.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/x86_64-mont-nasm.asm"
running "perl" "crypto/fipsmodule/bn/asm/x86_64-mont5.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/x86_64-mont5-nasm.asm"
running "perl" "crypto/fipsmodule/ec/asm/p256-x86_64-asm.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/p256-x86_64-asm-nasm.asm"
running "perl" "crypto/fipsmodule/modes/asm/aesni-gcm-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/aesni-gcm-x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/modes/asm/ghash-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/ghash-x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/sha/asm/sha512-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/sha512-x86_64-nasm.asm"
running "perl" "crypto/cipher_extra/asm/chacha20_poly1305_x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha20_poly1305_x86_64-nasm.asm"
running "perl" "crypto/fipsmodule/sha/asm/sha512-x86_64.pl" "nasm" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/sha256-x86_64-nasm.asm"
running "./target/tools/windows/nasm/nasm" "-o" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha-x86_64-nasm.o" "-f" "win64" "-i" "include/" "-i" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/" "-Xgnu" "-gcv8" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha-x86_64-nasm.asm"
thread 'main' panicked at 'failed to execute ["./target/tools/windows/nasm/nasm" "-o" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha-x86_64-nasm.o" "-f" "win64" "-i" "include/" "-i" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/" "-Xgnu" "-gcv8" "/target/x86_64-pc-windows-gnu/debug/build/ring-0a349917f970412e/out/chacha-x86_64-nasm.asm"]: No such file or directory (os error 2)', build.rs:711:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
``` |
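A rough sketch of the differentiation suggested above; this is an assumption about a possible fix, not ring's actual build code (Cargo sets the `HOST` environment variable for build scripts):
```rust
// sketch: pick the assembler by build *host*, not only by target, so that
// cross-compiling from linux to x86_64-pc-windows-gnu uses the host's nasm
fn nasm_command() -> &'static str {
    let host = std::env::var("HOST").expect("cargo sets HOST for build scripts");
    if host.contains("windows") {
        "./target/tools/windows/nasm/nasm" // bundled Windows binary
    } else {
        "nasm" // host-installed nasm when cross-compiling
    }
}
```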
jlippold/tweakCompatible | 564352170 | Title: `Fabric` partial on iOS 13.3.1
Question:
username_0: ```
{
"packageId": "com.creaturecoding.fabric",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.creaturecoding.fabric",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/com.creaturecoding.fabric/",
"iOSVersion": "13.3.1",
"packageVersionIndexed": true,
"packageName": "Fabric",
"category": "Tweaks",
"repository": "CreatureCoding",
"name": "Fabric",
"installed": "1.5.6",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.creaturecoding.fabric",
"commercial": true,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "A native and dynamic lockscreen time view",
"latest": "1.5.6",
"author": "CreatureSurvive",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "partial",
"notes": "Changing Wallpaper not working"
}
```<issue_closed>
Status: Issue closed |
arx-deidentifier/arx | 379160000 | Title: String type attribute masking
Question:
username_0: Hi,
We have to apply data anonymization to our data, for which we have selected ARX. Now the problem is: for string-type attributes like name and address, we want masking to be enabled, e.g. if the name value is "Alexander" it should be transformed to "AlXX***" or a similar masked form, so that the data is not compromised and testing is easy for our QA department. Most importantly, this anonymization must be one-way: it must not be possible to de-anonymize "AlXX***" back to "Alexander".
Any help in this regard is appreciated.
Answers:
username_1: Hi,
thanks for your interest in ARX! Why don't you simply define them as identifying (and thereby replace them with "*")? If it should not be possible to go back to the original values, "*" seems to be as good as "AIXX***".
Best
Fabian
username_0: Thanks for your response. I did the same: declared the attribute as identifying, then created a hierarchy, which left me with a hierarchy of levels 0 to 12. I also selected the privacy model named "(2, 1e-6)-Differential privacy", but after anonymization all values are replaced with "*". I want only some of the characters masked with the asterisk "*", so that the data can still be partially identified by the QA department.
Status: Issue closed
username_1: I'm not sure if I understand your requirements. It seems like, as if you want to implement data masking or pseudonymization. If so, this is currently not supported by ARX. Please see https://github.com/arx-deidentifier/arx/issues/203
What you can try, however, is to set the attribute type for the column to "quasi-identifying", select a generalization level that you want to apply (i.e. specify a fixed minimum and maximum level) and then perform anonymization with a neutral privacy model (I recommend (0, 1)-presence).
For additional help on how to use ARX, you can also take a look at our tutorial: https://www.youtube.com/watch?v=N8I-sxmMfqQ
I will close this issue now. Please direct further questions regarding the use of ARX towards ar<EMAIL> |
saha-dizel/SokobanLab | 258393654 | Title: Level generation
Question:
username_0: Most likely the starting levels will have to be moved into a plain text file and read from there, because otherwise all of them would have to be hard-coded, which would take far too long for no good reason. As a bonus, this would effectively also make it possible to add user-defined levels.<issue_closed>
Status: Issue closed |
knbarton/ners570-finalproject | 759650164 | Title: Parallelize Python Animation Script
Question:
username_0: Issue
=====
The current python script takes more time than the rest of the project to run. The goal is to see if it's possible to run it in parallel and speed it up.
Status: Issue closed
Answers:
username_0: The parallelized python script has been completed. |
rg3915/vendas | 505852253 | Title: I Need Help
Question:
username_0: Good morning Professor,
I am having a problem when initializing the vendas project; it always reports this error:
File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\shlex.py", line 137, in read_token
nextchar = self.instream.read(1)
AttributeError: 'list' object has no attribute 'read'
Line 137 has the following code:
while True:
if self.punctuation_chars and self._pushback_chars:
nextchar = self._pushback_chars.pop()
else:
**nextchar = self.instream.read(1)**
if nextchar == '\n':
self.lineno += 1
if self.debug >= 3:
print("shlex: in state %r I see character: %r" % (self.state,
nextchar))
if self.state is None:
self.token = '' # past end of file
I installed Python 3.7 and Django 2.
Status: Issue closed
Answers:
username_1: @username_0 I updated the project. Please try again.
username_0: Good evening Professor,
Thank you very much, but I am struggling with this error:
(.venv) C:\Users\easyg\vendas>manage.py makemigrations core
Traceback (most recent call last):
File "C:\Users\easyg\vendas\manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\core\management\__init__.py", line 325, in execute
settings.INSTALLED_APPS
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 79, in __getattr__
self._setup(name)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\easyg\AppData\Local\Programs\Python\Python37-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\easyg\vendas\vendas\settings.py", line 13, in <module>
SECRET_KEY = config('SECRET_KEY')
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 197, in __call__
return self.config(*args, **kwargs)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 85, in __call__
return self.get(*args, **kwargs)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 70, in get
raise UndefinedValueError('{} not found. Declare it as envvar or define a default value.'.format(option))
decouple.UndefinedValueError: SECRET_KEY not found. Declare it as envvar or define a default value.
(.venv) C:\Users\easyg\vendas>manage.py makemigrations core
Traceback (most recent call last):
File "C:\Users\easyg\vendas\manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\core\management\__init__.py", line 325, in execute
settings.INSTALLED_APPS
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 79, in __getattr__
self._setup(name)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\django\conf\__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\easyg\AppData\Local\Programs\Python\Python37-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\easyg\vendas\vendas\settings.py", line 17, in <module>
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default=[], cast=Csv())
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 197, in __call__
return self.config(*args, **kwargs)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 85, in __call__
return self.get(*args, **kwargs)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 79, in get
return cast(value)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 233, in __call__
return self.post_process(transform(s) for s in splitter)
File "C:\Users\easyg\vendas\vendas\.venv\lib\site-packages\decouple.py", line 233, in <genexpr>
return self.post_process(transform(s) for s in splitter)
File "C:\Users\easyg\AppData\Local\Programs\Python\Python37-32\lib\shlex.py", line 295, in __next__
token = self.get_token()
File "C:\Users\easyg\AppData\Local\Programs\Python\Python37-32\lib\shlex.py", line 105, in get_token
raw = self.read_token()
File "C:\Users\easyg\AppData\Local\Programs\Python\Python37-32\lib\shlex.py", line 136, in read_token
nextchar = self.instream.read(1)
AttributeError: 'list' object has no attribute 'read'
username_1: @username_0 you need to run the command that is in the README.
`cp contrib/env-sample .env`
username_0: Thank you so much, Professor.
I am learning to program and I use Windows, and I cannot run this command.
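For reference, on the Windows `cmd` shell the equivalent of that command would be something like:
```
copy contrib\env-sample .env
```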
username_0: I got it working, Professor.
Thank you very much, you helped me a lot.
element-plus/element-plus | 757672378 | Title: [Bug Report]: v-on=`listeners` is deprecated since multi-root support
Question:
username_0: Due to the deprecation of `v-on="listeners"` in Vue3, I think the components with this syntax should be replaced with `v-bind="$attrs"`.
Refer to [$listeners removed](https://v3.vuejs.org/guide/migration/listeners-removed.html#_2-x-syntax)
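For illustration, the migration described in the linked guide amounts to:
```html
<!-- Vue 2 -->
<MyComponent v-bind="$attrs" v-on="$listeners" />

<!-- Vue 3: listeners are now part of $attrs -->
<MyComponent v-bind="$attrs" />
```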
Answers:
username_0: I misunderstood the documentation, closing.
Status: Issue closed
|