| repo_name (string, 4–136 chars) | issue_id (string, 5–10 chars) | text (string, 37–4.84M chars) |
---|---|---|
prisma/migrate | 730209386 | Title: Error: Generic error: The datasource db already exists in this Schema. It is not possible to create it once more.
Question:
username_0: <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
Since I updated my Prisma version, I've been having problems trying to migrate an existing schema that previously worked.
## How to reproduce
run
```sh
$ prisma migrate save --experimental
```
throws
```sh
yarn dev:migrate
yarn run v1.22.4
$ prisma migrate save --experimental && prisma migrate up --experimental
Environment variables loaded from prisma/.env
Prisma Schema loaded from prisma/schema.prisma
Error: Generic error: The datasource db already exists in this Schema. It is not possible to create it once more.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
The migration should run successfully without errors.
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
## Environment & setup
```sh
❯ prisma -v
Environment variables loaded from prisma/.env
@prisma/cli : 2.9.0
@prisma/client : 2.9.0
Current platform : debian-openssl-1.1.x
Query Engine : query-engine 369b3694b7edb869fad14827a33ad3f3f49bbc20 (at ../../../../../opt/node/lib/node_modules/@prisma/cli/query-engine-debian-openssl-1.1.x)
Migration Engine : migration-engine-cli 369b3694b7edb869fad14827a33ad3f3f49bbc20 (at ../../../../../opt/node/lib/node_modules/@prisma/cli/migration-engine-debian-openssl-1.1.x)
Introspection Engine : introspection-core 369b3694b7edb869fad14827a33ad3f3f49bbc20 (at ../../../../../opt/node/lib/node_modules/@prisma/cli/introspection-engine-debian-openssl-1.1.x)
Format Binary : prisma-fmt 369b3694b7edb869fad14827a33ad3f3f49bbc20 (at ../../../../../opt/node/lib/node_modules/@prisma/cli/prisma-fmt-debian-openssl-1.1.x)
Studio : 0.296.0
Preview Features : transactionApi
```
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> Linux (Ubuntu 20.04)
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> PostgreSQL
- Node.js version: <!--[Run `node -v` to see your Node.js version]--> v14.8.0
- Prisma version: 2.9.0
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
Answers:
username_0: ### Work Around
I've fixed it by deleting the `_Migration` table in my DB and the `migrations` folder in the Prisma directory, but I just don't know why it stopped migrating automatically.
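A minimal sketch of that workaround, assuming PostgreSQL and the default `prisma/migrations` location — this is destructive, so use with care:
```sh
# hypothetical reset of the experimental migration state
psql "$DATABASE_URL" -c 'DROP TABLE IF EXISTS "_Migration";'
rm -rf prisma/migrations
```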
username_1: Thanks for taking the time to report this issue!
We've recently released a Prisma Migrate Preview ([2.13.0 release notes](https://github.com/prisma/prisma/releases/tag/2.13.0)), which has quite a few changes compared to the previous experimental version. We believe this issue is no longer relevant in this new version, so we are closing this.
We would encourage you to try out Prisma Migrate Preview. If you come across this or any other issue in the preview version, please feel free to open a [new issue in prisma/prisma](https://github.com/prisma/prisma/issues/new/choose).
For general feedback on Prisma Migrate Preview, feel free to chime in [on this issue](https://github.com/prisma/prisma/issues/4531).
Status: Issue closed
|
vapor/redis | 482823542 | Title: Promises for commands in flight do not return when the connection is dropped
Question:
username_0: Hey @username_1. I am using v3.4.0 (the Vapor 3.0/Redis NIO implementation) and noticed that when requests are in flight and the connection gets dropped, the promise is not getting fulfilled with an error (or anything for that matter), so I never know it happened. The system just hangs.
Easiest way to reproduce is to do a long "brpop" command and kill the connection. You'll see that no error is thrown and no response is received. See below
```swift
print("DEBUG: sending brpop request")
redisClient.send(RedisData.array(["brpop", "queue_key", "5"].map { RedisData(bulk: $0) })).do { data in
print("DEBUG: received data \(data)")
}.catch { error in
print("DEBUG: received error \(error)")
}
```
If I wait the 5 seconds and let the timeout expire, I will get "nil" back as expected. If I kill the connection before that, I get nothing back.
As a workaround, I am going to create a delayed task that will fulfill the promise with an error if I get nothing back by the time it runs, and then cancel it if I do get something back. This feels hacky/expensive but will get the job done for now.
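A rough sketch of that stopgap, assuming SwiftNIO's `scheduleTask` on the client's event loop — `command` is a placeholder `RedisData` value like the brpop array above, and the timeout value is arbitrary:
```swift
// hedged sketch: notice ourselves if nothing comes back in time
let timeout = redisClient.eventLoop.scheduleTask(in: .seconds(6)) {
    print("DEBUG: no response, assuming the connection dropped")
}
redisClient.send(command).do { data in
    timeout.cancel()
    print("DEBUG: received data \(data)")
}.catch { error in
    timeout.cancel()
    print("DEBUG: received error \(error)")
}
```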
I don't know the NIO code very well, but is there a way to clear out pending promises with an error when the connection closes?
Answers:
username_0: I got a chance to look at this. I was able to resolve this at the RedisClient level. I did the following
- In RedisClient -> Send
- Store the promise when it is created
- Note: I used a dictionary, since removing an object from an array is now a pain in Swift
- Remove the promise when it is fulfilled (future.always)
- In RedisClient -> Init -> channel.closeFuture.always
- Iterate through the stored promises and send a "ChannelError"
- Empty the storage object
Diff below
```
bash-3.2$ git diff
diff --git a/Sources/Redis/Client/RedisClient.swift b/Sources/Redis/Client/RedisClient.swift
index 177277a..901ec86 100644
--- a/Sources/Redis/Client/RedisClient.swift
+++ b/Sources/Redis/Client/RedisClient.swift
@@ -20,7 +20,10 @@ public final class RedisClient: DatabaseConnection, BasicWorker {
/// The channel
private let channel: Channel
-
+
+ /// Stores the inflight promises so they can be fulfilled when the channel drops
+ var inflightPromises: [String:EventLoopPromise<RedisData>] = [:]
+
/// Creates a new Redis client on the provided data source and sink.
init(queue: RedisCommandHandler, channel: Channel) {
self.queue = queue
@@ -28,6 +31,12 @@ public final class RedisClient: DatabaseConnection, BasicWorker {
self.extend = [:]
self.isClosed = false
channel.closeFuture.always {
+ // send closed error for the promises that have not been fulfilled
+ for promise in self.inflightPromises.values {
+ promise.fail(error: ChannelError.ioOnClosedChannel)
+ }
+ self.inflightPromises.removeAll()
+
self.isClosed = true
}
}
@@ -55,6 +64,13 @@ public final class RedisClient: DatabaseConnection, BasicWorker {
// create a new promise to fulfill later
let promise = eventLoop.newPromise(RedisData.self)
+ // logic to store in-flight requests
+ let key = UUID().uuidString
+ self.inflightPromises[key] = promise
+ promise.futureResult.always {
+ self.inflightPromises.removeValue(forKey: key)
+ }
+
// write the message and the promise to the channel, which the `RequestResponseHandler` will capture
return self.channel.writeAndFlush((message, promise))
.flatMap { return promise.futureResult }
```
@username_1 If this solution is acceptable, I'll create a PR accordingly. Let me know what you think
username_0: @username_1 I created a PR for this. Use it if you desire. I needed to fix it for my needs regardless.
username_1: @username_0 Sorry for the extremely long delay in response - August has been way too busy for me.
I left a comment on the PR of where the code can live. The primary thing is the handler is missing a good implementation for either `channelInactive` or something else to respond to the channel now closing - which is honestly a problem upstream w/ SwiftNIO Extras as well
username_0: @username_1 That was what I was looking for! I knew the in-flight promises were stored somewhere already. I just needed to go a level deeper in the code. Let me know if my modified code is correct. And no problem being busy. It happens to all of us. |
MasoniteFramework/core | 455003897 | Title: Explicit Python3 and Pip3 in Makefile
Question:
username_0: Just looked at the Makefile and realized something: all the python and pip commands follow the Python 2 and pip 2 convention. Due to some weirdness in the Python community, `python` and `pip` are not necessarily `python3` and `pip3`. For instance, macOS' default Python installation is 2.7.
Solution: replace all instances of `python` and `pip` with `python3` and `pip3`.
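A hypothetical before/after excerpt — the actual targets in the Makefile may differ:
```make
# before: `pip install -r requirements.txt` and `python -m pytest`
# after (recipe lines must be tab-indented):
init:
	pip3 install -r requirements.txt

test:
	python3 -m pytest
```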
Answers:
username_1: Those commands are written with the assumption that you have activated a virtual environment for Python 3.
username_0: Not all people use virtual environments. Once again, my coworkers are not that fond of them. There are also people who don't know to use a virtual environment.
username_2: This would really only be for Masonite core development / contributing. Not all people use Masonite, but also not all installations use the `python3` alias. It depends on how you installed Python.
So the solution here is really to be inside a virtual environment when using the command. Either solution will break for some people: it'll either break for those who don't use a virtual env, or for those who don't have the `python3` alias.
Status: Issue closed
|
typestack/class-transformer | 671931857 | Title: Is it possible to do deserialization from string to class without writing @Transform
Question:
username_0: **I was trying to...**
Let's imagine that I have some class like this:
```ts
export default class SecureString {
private readonly value: string;
private readonly encodedValue: string;
constructor(original: string) {
this.value = transformValue(original);
this.encodedValue = transformEncodedValue(original);
}
}
```
and I want to use it in another class:
```ts
export default class Test {
password: SecureString;
}
```
then if I try `plainToClass(Test, {password: 'foo'})`, it will not transform `password` into a `SecureString` instance.
```ts
const transformed = plainToClass(Test, {password: 'foo'});
console.log(transformed);
// prints: Test { password: 'foo' }
```
I can set a transform decorator:
```ts
export default class Test {
@Transform((value) => new SecureString(value))
password: SecureString;
}
```
But then I'd have to write this boilerplate code for each `SecureString` field.
How can I write my code without that boilerplate?
I also tried adding `@Transform` to the fields inside `SecureString`:
```ts
export default class SecureString {
@Transform((original: string) => transformValue(original), { toClassOnly: true })
private readonly value: string;
@Transform((original: string) => transformEncodedValue(original), { toClassOnly: true })
private readonly encodedValue: string;
constructor(original: string) {
this.value = transformValue(original);
this.encodedValue = transformEncodedValue(original);
}
}
```
But the result is the same.
<!-- Please detail what you were trying to achieve before encountering the problem. -->
<!-- Paste code snippets if applicable. -->
**The problem:**
Deserializing a string into a class that is nested within another class.
<!-- Detail the problem you encountered while trying to achieve your goal. -->
<!-- Paste code snippets if applicable. -->
Answers:
username_0: The best option that I have now is to create a custom decorator:
```ts
export function ToSecureString() {
return Transform((value) => new SecureString(value));
}
```
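Usage of that custom decorator would then look like this (a sketch based on the snippets above):
```ts
export default class Test {
  @ToSecureString()
  password: SecureString;
}
```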
username_1: As you mentioned in your second reply, you need a custom decorator for this.
Status: Issue closed
|
bgruening/docker-galaxy-stable | 217612700 | Title: Unable to find config file './dependency_resolvers_conf.xml'
Question:
username_0: Hi Bjoern,
I just accidentally upgraded to a newer version of your image.
<img width="803" alt="screen shot 2017-03-28 at 12 01 50 pm" src="https://cloud.githubusercontent.com/assets/2761597/24415102/5c7389ce-13ae-11e7-8af7-6864b8486a85.png">
It seems like the galaxy containers won't start -- and it seems like the error is:
```
galaxy.tools.deps DEBUG 2017-03-28 15:12:24,621 Unable to find config file './dependency_resolvers_conf.xml'
```
The documentation about [dependency resolvers](https://docs.galaxyproject.org/en/master/admin/dependency_resolvers.html) doesn't seem to suggest that I would *need* to edit anything or add such a file. The documentation about this repo doesn't seem to suggest I would need to add one either. However, other [flavors](https://github.com/fasrc/fasrc-galaxy) of docker-galaxy seem to have dependency resolver files, but when I add one to the export mount point, I still get the same error. How do I fix this?
Answers:
username_0: @username_1 might have ideas.
username_1: hmmm… i don't see where we copied that in, but it's in the repo. maybe it's a default config. have you checked that the config file's readable by the galaxy user? that file may also not be needed by current versions of the container/galaxy. i'm not an expert in galaxy config, by any means.
username_2: This is not an error, it's more of a Galaxy warning message. At startup of a fresh container, Galaxy should look for the *.sample file, but not this file.
@username_0 look for another error. We have some reports that this is related to your storage backend in Docker; try to change this. The image is working and starting on Travis, Quay.io and Dockerhub.
username_0: Thanks @username_2,
It looks like when I tried to add my own dependency resolver file, I got this new error as well:
```
Traceback (most recent call last):
File "lib/galaxy/webapps/galaxy/buildapp.py", line 55, in paste_app_factory
app = galaxy.app.UniverseApplication( global_conf=global_conf, **kwargs )
File "lib/galaxy/app.py", line 98, in __init__
self._configure_toolbox()
File "lib/galaxy/config.py", line 927, in _configure_toolbox
self.reload_toolbox()
File "lib/galaxy/config.py", line 911, in reload_toolbox
self.toolbox = tools.ToolBox( tool_configs, self.config.tool_path, self )
File "lib/galaxy/tools/__init__.py", line 192, in __init__
tool_conf_watcher=tool_conf_watcher
File "lib/galaxy/tools/toolbox/base.py", line 1068, in __init__
self._init_dependency_manager()
File "lib/galaxy/tools/toolbox/base.py", line 1081, in _init_dependency_manager
self.dependency_manager = build_dependency_manager( self.app.config )
File "lib/galaxy/tools/deps/__init__.py", line 41, in build_dependency_manager
dependency_manager = DependencyManager( **dependency_manager_kwds )
File "lib/galaxy/tools/deps/__init__.py", line 84, in __init__
self.dependency_resolvers = self.__build_dependency_resolvers( conf_file )
File "lib/galaxy/tools/deps/__init__.py", line 193, in __build_dependency_resolvers
return self.__default_dependency_resolvers()
File "lib/galaxy/tools/deps/__init__.py", line 202, in __default_dependency_resolvers
CondaDependencyResolver(self),
File "lib/galaxy/tools/deps/resolvers/conda.py", line 62, in __init__
self._setup_mapping(dependency_manager, **kwds)
File "lib/galaxy/tools/deps/resolvers/__init__.py", line 83, in _setup_mapping
mappings.extend(MappableDependencyResolver._mapping_file_to_list(mapping_file))
File "lib/galaxy/tools/deps/resolvers/__init__.py", line 88, in _mapping_file_to_list
with open(mapping_file, "r") as f:
IOError: [Errno 2] No such file or directory: './config/local_conda_mapping.yml.sample'
```
Does that give you any clues?
I'm getting two warnings from tools which recur:
```
galaxy.tools WARNING 2017-03-28 16:08:33,205 Tool toolshed.g2.bx.psu.edu/repos/devteam/bamtools_filter/bamFilter/0.0.2: a when tag has not been defined for 'rule_configuration (rules_selector) --> false', assuming empty inputs.
galaxy.tools.parameters.dynamic_options WARNING 2017-03-28 16:08:33,528 Data table named 'gff_gene_annotations' is required by tool but not configured
```
I see this error near the top of the logs as well:
```
==> /home/galaxy/logs/uwsgi.log <==
return self.__default_dependency_resolvers()
File "lib/galaxy/tools/deps/__init__.py", line 202, in __default_dependency_resolvers
CondaDependencyResolver(self),
File "lib/galaxy/tools/deps/resolvers/conda.py", line 62, in __init__
self._setup_mapping(dependency_manager, **kwds)
File "lib/galaxy/tools/deps/resolvers/__init__.py", line 83, in _setup_mapping
mappings.extend(MappableDependencyResolver._mapping_file_to_list(mapping_file))
File "lib/galaxy/tools/deps/resolvers/__init__.py", line 88, in _mapping_file_to_list
with open(mapping_file, "r") as f:
IOError: [Errno 2] No such file or directory: './config/local_conda_mapping.yml.sample'
```
Do any of those look particularly suspicious?
username_0: I've switched back to an older version (2 months ago) and it runs, everything else held equal.
username_2: You are running it with an export directory?
username_0: yes.
username_2: Jupp, follow the instructions :)
`IOError: [Errno 2] No such file or directory: './config/local_conda_mapping.yml.sample'` this file is new and you need to update it.
username_0: Thanks Bjoern!
Status: Issue closed
|
rust-random/rand | 684588943 | Title: rand_distr no_std
Question:
username_0: Can you please update rand_distr on crates.io? The version on GitHub has a "std" feature and can be used without it, but the version on crates.io doesn't have any features.
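For context, once such a release is published, a `no_std` consumer would depend on it roughly like this (a sketch — the version number assumes the 0.3 release discussed in the answers below, and that `std` is a default feature):
```toml
[dependencies]
rand_distr = { version = "0.3", default-features = false }
```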
Answers:
username_1: Possibly. We were going to wait until `rand` 0.8 was out, but that could still be a while (unknown). With [this commit](https://github.com/rust-random/rand/commit/0c9e2944055610561eef74247614319301cca1f6) `rand_distr`'s tests pass, so I guess so? @username_2?
There's still a changelog to write.
username_2: I suppose we could just release `rand_distr` 0.3, and release 0.4 after rand 0.8.
@username_1 What changelog is yet to write? I think `CHANGELOG.md` is up to date.
username_1: Oh, it might be (the changelog). :+1:
*Maybe* I'll get to this later today.
username_1: There were a couple of things missed in the changelog. Done.
@username_2 there is no PR because this must be on a [new branch](https://github.com/rust-random/rand/tree/rand_distr). Can you review from here? Once done either of us can make the release.
Then the changes to the changelog should be merged back into master, but not all the changes to `Cargo.toml`...
username_2: @username_1 Looks good to me!
username_1: Published. @username_0 can you test?
username_2: @username_1 I opened #1024 for merging the updated changelog into master.
Status: Issue closed
username_0: Sorry for the absence. Yeah it seems to work fine with no_std thanks for help. |
alextoind/serial | 655383081 | Title: Inconsistency between Unix and Windows implementations
Question:
username_0: During #22 it came out that some functions behave differently between Unix and Windows.
The library interface should be exactly the same for both implementations.
Status: Issue closed
Answers:
username_0: Still missing the refactoring of `reconfigurePort()`.
Status: Issue closed
|
latex3/latex2e | 713580947 | Title: \bigskip, \medskip, \and \smallskip
Question:
username_0: ## Brief outline of the enhancement
**LaTeX2e generally cannot add new features without an extreme amount of care to accommodate backwards compatibility. Please do not be offended if your request is closed for being infeasible.**
## Minimal example showing the current behaviour
```latex
\RequirePackage{latexbug} % <--should be always the first line (see CONTRIBUTING)!
\documentclass{article}
\renewcommand\bigskip[1][1]{\vspace{#1\bigskipamount}}
% instead of \def\bigskip{\vspace\bigskipamount}
\begin{document}
foo
\bigskip
bar
\bigskip[2]
baz
\end{document}
```
It would be nice if we could have an optional argument for the three macros, which is easier to write than
`\vspace{2\bigskipamount}`. The same applies to the other two skip macros.
Answers:
username_1: The problem with this is that it is a trailing optional argument, so
```
\bigskip
[put figure here]
\bigskip
```
would suddenly blow up, and that isn't a totally obscure scenario.
username_2: Also, the meaning of the integer parameter is not very intuitive.
A better solution would be a user-shorthand macro/command with a mandatory argument, something like `\varbigskip {<scale>}`.
Status: Issue closed
username_3: Note also the definition would need to be more complicated than the suggested
```
\renewcommand\bigskip[1][1]{\vspace{#1\bigskipamount}}
```
as that loses the stretch and shrink components and would make `\bigskip` a fixed length
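For completeness, a glue-preserving sketch along the lines of the suggested `\varbigskip` — assuming eTeX's `\glueexpr` and an integer factor; this is not an official interface:
```latex
% hypothetical helper: scales natural size, stretch, and shrink alike
\newcommand\varbigskip[1][1]{\vspace{\glueexpr\bigskipamount*#1\relax}}
```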
username_2: Sure, that is a silly nuisance factor!
You will need a little arithmetic to 'preserve the glue'! |
cisco/openh264 | 190356741 | Title: Decoding in VLC
Question:
username_0: Hi all,
I built the codec for macOS and linked it to a VLC plugin project (this project: https://github.com/ASTIM/OpenH264-VLC-Plugin).
With this, and using the VLC parameter `--demux=h264`, I can correctly play a raw H.264 file.
The problem is when decoding the H.264 stream provided via RTSP by my Axis M1125. For every NAL I obtain error number 16. The following is the SDP provided by the Axis.
How can I decode this? What kind of NALs does the OpenH264 codec expect?
Thanks
```
v=0
o=- 3253925629317495365 1 IN IP4 172.16.16.129
s=Session streamed with GStreamer
i=rtsp-server
t=0 0
a=tool:GStreamer
a=type:broadcast
a=range:npt=now-
a=control:rtsp://172.16.16.129:554/axis-media/media.amp?videocodec=h264
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:50000
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=4d0029;sprop-parameter-sets=Z00AKeKQDwBE/LgLcBAQGkHiRFQ=,aO48gA==
a=control:rtsp://172.16.16.129:554/axis-media/media.amp/stream=0?videocodec=h264
a=framerate:30.000000
a=transform:-1.000000,0.000000,0.000000;0.000000,-1.000000,0.000000;0.000000,0.000000,1.000000
```
Answers:
username_0: I apologize,
At the end of the day I discovered that it is enough to open the URL with `&spsppsenabled=yes` appended.
What I do not understand is why I have a lot of errors during buffering:
many `dsDataErrorConcealed` and 0x22 ...
username_1: VLC likely doesn't see (or doesn't support) sprop-parameter-sets. You could emulate it by decoding the parameter sets and inserting them before any data - it comprises a PPS and an SPS.
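A sketch of that emulation from the shell, assuming bash and GNU `base64` — the two comma-separated base64 blobs in `sprop-parameter-sets` above are the SPS and PPS, each of which needs an Annex B start code:
```sh
printf '\x00\x00\x00\x01'  > params.h264
echo 'Z00AKeKQDwBE/LgLcBAQGkHiRFQ=' | base64 -d >> params.h264
printf '\x00\x00\x00\x01' >> params.h264
echo 'aO48gA==' | base64 -d >> params.h264
# prepend the parameter sets to the raw stream before decoding
cat params.h264 stream.h264 > playable.h264
```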
username_0: Thank you for the answer. I'm new to H.264, so I will check those parameters and let you know.
username_2: It seems that your question has been resolved, so I'm closing this issue. If you still have questions, please feel free to submit a new issue. Thanks!
Status: Issue closed
|
backend-br/vagas | 1036699622 | Title: Java at Base2
Question:
username_0:
## ID
- 434611590

## Job description
- Java

## Required qualifications
- Experience with Java 8 or higher
- Strong experience in multi-tier/enterprise Java development
- Strong systems perspective (software development life cycle)
- Oracle Database 11 or higher
- Experience with SQL (standard SQL or Oracle PL/SQL)
- JPA/Hibernate
- JUnit
- SonarQube
- Design patterns (MVC, VO, DAO, BO, Factory, Singleton, etc.)
- Eclipse (IDE)
- Experience with REST and SOAP APIs
- Attention to code quality and functionality

## Desirable qualifications
- Knowledge of the CRM domain
- XHTML/HTML5/CSS3/JavaScript
- JSP/Servlet/JSF
- Spring MVC/Spring Boot
- Apache Tomcat 6 or higher
- OpenShift platform
- DevOps culture
- Docker (containerization)
- Maven/Git/GitFlow
- Scrum/XP/agile methodologies
- Knowledge of cloud computing (AWS)

## Responsibilities
[Truncated]

## How to apply
- Apply through the site [www.base2.com.br](http://www.base2.com.br) or vagas.base2.com.br

## Average feedback time
We usually send feedback within 7 days after each process.

## Labels
- Mid-level (Pleno), Senior
- CLT
makojs/mako | 239963965 | Title: Switch to ES6 modules
Question:
username_0: _From @username_0 on November 28, 2016 7:38_
The more time goes on, the more I'd like to investigate using ES6 modules, rather than CommonJS. This would drastically affect the internals of this plugin, but ultimately it is the right direction to go in the long run.
I was previously thinking that I'd implement #17 by analyzing the CommonJS outputs, but kept running into blockers due to its less-than-static nature. I recently discovered that rollup/webpack only perform tree-shaking on ES6 modules, since they are static by definition. (this drastically reduces the complexity)
All in all, I'm not sure what the right approach here is. The general use-case has been relying on code published to npm, which is generally pre-compiled to CommonJS. I also don't want to become too opinionated by bundling babel directly here to support the ES6/CommonJS divide.
It looks like rollup relies on `jsnext:main` in a `package.json` to reference a file that uses `import` and `export`, but I'm worried about an already fractured build process that uses npm code that assumes ES6 because of node 4+. But maybe I can piggy-back on the efforts of rollup and others.
_Copied from original issue: makojs/js#99_
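For reference, the `jsnext:main` convention is just an extra field in `package.json` that points at untranspiled ES6 sources (a sketch):
```json
{
  "main": "dist/index.js",
  "jsnext:main": "src/index.js"
}
```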
Answers:
username_0: _From @darsain on November 28, 2016 14:16_
Apart from the removal of unused code, the other awesome thing that these tools do is inline all module definitions. Or in other words, instead of mocking a `require()` module system in the build, they inline the exported stuff, and thus remove the CommonJS module system tax, which speeds up load times considerably.
So I'd be pretty excited to have this in mako as well :) But personally I'm unfamiliar how that works in practice in combination with existing stuff in npm, since I haven't really used rollup or webpack yet.
Also, I'm not aware that there is already a consensus on how are ES modules going to work in node. I know there has been a ton of going back and forth between people who want to make it seamless, with those that want a custom extensions like `.mjs`, and other horrible solutions. I'm afraid this might lead to some clashes later on, but can't give any specifics. Haven't kept up with it 😢.
go-gitea/gitea | 743300562 | Title: Invalid clone URL for SSH in empty repo view if SSH port is not 22
Question:
username_0: - Gitea version (or commit ref): 1.12.5
- Git version: 2.25.1
- Operating system: Ubuntu 20.04
running gitea with docker compose (see below)
- Database (use `[x]`):
- [ ] PostgreSQL
- [ ] MySQL
- [ ] MSSQL
- [x] SQLite
- Can you reproduce the bug at https://try.gitea.io:
- [ ] Yes (provide example URL)
- [x] No (no port number is required)
- Log gist:
`docker-compose.yml`:
```yml
version: "3"
networks:
gitea:
external: false
services:
gitea:
image: gitea/gitea:1
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
- SSH_DOMAIN=localhost:3022
restart: always
networks:
- gitea
volumes:
- ./gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "3022:22"
```
<!-- It really is important to provide pertinent logs -->
<!-- Please read https://docs.gitea.io/en-us/logging-configuration/#debugging-problems -->
<!-- In addition, if your problem relates to git commands set `RUN_MODE=dev` at the top of app.ini -->
## Description
I run Gitea as a container and forward port 3022 to the container's port 22. I've found out that clone URLs provided for my repos are malformed:
```
kacper@kngl:~/git/gitea$ touch README.md
kacper@kngl:~/git/gitea$ git init
Initialized empty Git repository in /home/kacper/git/gitea/.git/
kacper@kngl:~/git/gitea[master]$ git add README.md
kacper@kngl:~/git/gitea[master]$ git commit -m "first commit"
[master (root-commit) 801f90b] first commit
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 README.md
kacper@kngl:~/git/gitea[master]$ git remote add origin git@localhost:3022:Kangel/example.git
kacper@kngl:~/git/gitea[master]$ git push -u origin master
ssh: connect to host localhost port 22: Connection refused
[Truncated]
However, if I modify the URL, it works just fine:
```
kacper@kngl:~/git/gitea[master]$ git remote remove origin
kacper@kngl:~/git/gitea[master]$ git remote add origin ssh://git@localhost:3022/Kangel/example.git
kacper@kngl:~/git/gitea[master]$ git push -u origin master
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 218 bytes | 218.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: . Processing 1 references
remote: Processed 1 references in total
To ssh://localhost:3022/Kangel/example.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
```
## Screenshots

Status: Issue closed
Answers:
username_1: You've put a port into the domain name. Please see https://docs.gitea.io/en-us/config-cheat-sheet/#server-server for the setting to use to show a different SSH port, in which case it'll properly format the clone URL.
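A sketch of the likely fix for the compose file above, assuming the image's `SSH_PORT` variable maps to the `[server] SSH_PORT` setting from the linked cheat sheet:
```yml
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - SSH_DOMAIN=localhost
      - SSH_PORT=3022
```
|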
numba/numba | 155504371 | Title: LoweringError when creating typed numpy arrays in generators
Question:
username_0: ```
import numba
import numpy
@numba.jit
def test_works():
a = numpy.empty(3)
yield 15
test_works()
@numba.jit
def test_fails():
a = numpy.empty(3, dtype='float64')
yield 15
test_fails()
```
The first function compiles fine. However, the second raises a TypeError:
```
TypeError Traceback (most recent call last)
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/lowering.py in lower_block(self, block)
195 try:
--> 196 self.lower_inst(inst)
197 except LoweringError:
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/objmode.py in lower_inst(self, inst)
64 if isinstance(inst, ir.Assign):
---> 65 value = self.lower_assign(inst)
66 self.storevar(value, inst.target.name)
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/objmode.py in lower_assign(self, inst)
156 elif isinstance(value, ir.Yield):
--> 157 return self.lower_yield(value)
158 elif isinstance(value, ir.Arg):
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/objmode.py in lower_yield(self, inst)
186 y = generators.LowerYield(self, yp, yp.live_vars | yp.weak_live_vars)
--> 187 y.lower_yield_suspend()
188 # Yield to caller
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/generators.py in lower_yield_suspend(self)
320 if self.context.enable_nrt:
--> 321 self.lower.incref(ty, val)
322
TypeError: incref() takes 2 positional arguments but 3 were given
During handling of the above exception, another exception occurred:
LoweringError Traceback (most recent call last)
<ipython-input-13-2fbb685864c8> in <module>()
12 a = numpy.empty(3, dtype='float64')
13 yield 15
---> 14 test_fails()
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/dispatcher.py in _compile_for_args(self, *args, **kws)
286 else:
287 real_args.append(self.typeof_pyval(a))
--> 288 return self.compile(tuple(real_args))
289
290 def inspect_llvm(self, signature=None):
[Truncated]
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/lowering.py in lower_function_body(self)
181 bb = self.blkmap[offset]
182 self.builder.position_at_end(bb)
--> 183 self.lower_block(block)
184
185 self.post_lower()
/Users/ndcn0236/miniconda3/lib/python3.4/site-packages/numba/lowering.py in lower_block(self, block)
199 except Exception as e:
200 msg = "Internal error:\n%s: %s" % (type(e).__name__, e)
--> 201 raise LoweringError(msg, inst.loc)
202
203 def create_cpython_wrapper(self, release_gil=False):
LoweringError: Failed at object (object mode backend)
Internal error:
TypeError: incref() takes 2 positional arguments but 3 were given
File "<ipython-input-13-2fbb685864c8>", line 13
```
Answers:
username_1: Seems to be fixed in master already.
Status: Issue closed
username_0: You're right, it does indeed work with the newest version. |
ElektraInitiative/libelektra | 455581503 | Title: Prioritization of LCDproc features
Question:
username_0: 1. having the possibility to start LCDproc without any installation/mounting (built-in spec)
2. small binary size
Answers:
username_1: There obviously isn't a general answer.
4 is basically just a lazy way to monitor changes to 2 and 3.
For OpenWRT it would probably be 2-3-1 most of the time.
For my other systems it will be 3-2-1, except for my development workstation, where it is 1-4.
All these numbers assume that saving one byte in 2 costs one byte in 3. Even on OpenWRT I'd still sacrifice one byte on 2 to save say ten bytes on 3.
In the end there rarely is a true conflict among 234 and 1 is just a question of having some kind of convenience hack for development builds. (Eg at the moment we just pass a non-standard configuration file on the command line, if it isn't installed into the system yet.)
username_2: IMO installed size would include a separate specification file, if it has to be present, while binary size would not include that. The memory footprint is probably also not proportional (at least not in any obvious way) to the binary size. A load of strings will increase both similarly, a load of `keyNew` calls will increase memory usage much more than binary size (because of the key unescaping logic).
username_1: Yes, of course all these things have to be considered. Still I think comparing binary sizes gave us a lot of useful information. Once the obvious issues are solved, I might move to more advanced benchmarks, that involve actually running the code or building binary packages for various platforms.
username_0: Priorities should be fulfilled now.
Status: Issue closed
|
werf/werf | 847304687 | Title: werf helm dep update fails when secret-values are used
Question:
username_0: With the `.helm/secret-values.yaml` file command `werf helm dep update .helm` fails. Without `.helm/secret-values.yaml` works fine.
```
❯ werf helm dep update .helm
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x22099f6]
goroutine 1 [running]:
github.com/werf/werf/pkg/deploy/secrets_manager.(*SecretsManager).GetYamlEncoder(0x0, 0x2e57920, 0xc0008275f0, 0xc000050144, 0x18, 0x0, 0x0, 0x0)
/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/secrets_manager/secret_manager.go:27 +0x26
github.com/werf/werf/pkg/deploy/helm/chart_extender/helpers/secrets.(*SecretsRuntimeData).DecodeAndLoadSecrets(0xc000cdaba0, 0x2e57920, 0xc0008275f0, 0xc00065f900, 0xa, 0x10, 0x0, 0x0, 0x0, 0x7ffff29d7bd6, ...)
/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/chart_extender/helpers/secrets/secrets_runtime_data.go:31 +0x66e
github.com/werf/werf/pkg/deploy/helm/chart_extender.(*WerfChartStub).ChartLoaded(0xc000114cc0, 0xc00065f900, 0xa, 0x10, 0xa, 0x10)
/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/chart_extender/werf_chart_stub.go:73 +0x170
helm.sh/helm/v3/pkg/chart/loader.LoadFiles(0xc00065ed80, 0xa, 0x10, 0x2e7b940, 0xc000114cc0, 0xc000664620, 0x0, 0x0, 0x0)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/pkg/chart/loader/load.go:195 +0x1f6d
helm.sh/helm/v3/pkg/chart/loader.LoadDirWithOptions(0x7ffff29d7bd6, 0x5, 0x2e7b940, 0xc000114cc0, 0xc000664620, 0x0, 0xc00014dac8, 0x40c4c6)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/pkg/chart/loader/directory.go:63 +0x107
helm.sh/helm/v3/pkg/chart/loader.LoadDir(...)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/pkg/chart/loader/directory.go:48
helm.sh/helm/v3/pkg/downloader.(*Manager).loadChartDir(0xc0004afa70, 0x25d9a80, 0xc00061dcc0, 0xc000680000)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/pkg/downloader/manager.go:228 +0x13a
helm.sh/helm/v3/pkg/downloader.(*Manager).Update(0xc0004afa70, 0xc0004afa70, 0x2)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/pkg/downloader/manager.go:154 +0x40
helm.sh/helm/v3/cmd/helm.newDependencyUpdateCmd.func1(0xc000615080, 0xc00068cdc0, 0x1, 0x1, 0x0, 0x0)
/home/ubuntu/go/pkg/mod/github.com/werf/helm/[email protected]/cmd/helm/dependency_update.go:74 +0x1d1
github.com/werf/werf/cmd/werf/helm.NewCmd.func2(0xc000615080, 0xc00068cdc0, 0x1, 0x1, 0x0, 0x0)
/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/helm/helm.go:162 +0x630
github.com/spf13/cobra.(*Command).execute(0xc000615080, 0xc00068cda0, 0x1, 0x1, 0xc000615080, 0xc00068cda0)
/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:850 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000f6000, 0x0, 0xc0000cac30, 0x405b1f)
/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main()
/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/main.go:64 +0x119
```
```
❯ werf version
v1.2.10+fix21
```
Answers:
username_1: Reproduced, will fix ASAP.
Status: Issue closed
|
adobe/jsonschema2md | 279028750 | Title: Documentation should list where a property has been declared
Question:
username_0: If a property has been declared in a referenced schema, it should be included in full in the current document's documentation, with a note explaining that it is referenced from another schema.
For instance, for `Image`:
```markdown
## modify_date
`modify_date` is referenced from [Asset](asset.md#modify_date)
```
Status: Issue closed |
Azure/autorest-clientruntime-for-java | 441492447 | Title: Class AppServiceMSICredentials doesn't provide a way to pass clientID to call the local MSI endpoint
Question:
username_0: The class AppServiceMSICredentials does not include a definition for passing a clientID of the User-Assigned MSI.
This is a necessary parameter when working with User Assigned MSI. See [this ](https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity#using-the-rest-protocol )
Parameter name | In | Description
-- | -- | --
resource | Query | The AAD resource URI of the resource for which a token should be obtained. This could be one of the Azure services that support Azure AD authentication or any other resource URI.
api-version | Query | The version of the token API to be used. "2017-09-01" is currently the only version supported.
secret | Header | The value of the MSI_SECRET environment variable. This header is used to help mitigate server-side request forgery (SSRF) attacks.
**clientid** | **Query** | **(Optional) The ID of the user-assigned identity to be used. If omitted, the system-assigned identity is used.**
Currently using the Java SDK to call the MSI endpoint results in a 400 bad request as it doesn't include the clientid in the query string.
Probably something like this?
```java
@Beta
public AppServiceMSICredentials withClientId(String clientId) {
this.clientId = clientId;
this.objectId = null;
this.identityId = null;
return this;
}
```
Answers:
username_0: This works when I call the endpoint explicitly via curl in Kudu. We need a way for the clientid to be passed via the Java SDK.

username_0: Created a new one here:
https://github.com/Azure/azure-sdk-for-java/issues/4024
Status: Issue closed
|
episerver/EPiServer.Forms.Samples | 251991972 | Title: Add UK datetime format to the forms date picker
Question:
username_0: If you have a look in EpiserverFormsSamples.js, around line 7 there is no support for a UK date format in the DateFormats object.
The following entry needs to be added:
"en-gb": {
"pickerFormat": "dd/mm/yy",
"friendlyFormat": "dd/MM/yyyy"
} |
Pathoschild/Wikimedia-contrib | 321533198 | Title: All tools broken
Question:
username_0: `Fatal error: Invalid serialization data for DateTime object in /mnt/nfs/labstore-secondary-tools-project/meta/git/wikimedia-contrib/tool-labs/backend/modules/Cacher.php on line 153`
This message is appearing on all of your tools since yesterday. For example: https://tools.wmflabs.org/meta/userpages/MarcoAurelio
Any idea what's going on?
Thank you!
Status: Issue closed
Answers:
username_1: Fixed. The errors were caused by a cache file containing invalid data somehow; I cleared the cache and everything should be fine now. Thanks for reporting it! |
filerd002/BattleTank | 316524893 | Title: Unifying game elements, part 1
Question:
username_0: So, here's the thing: practically all of the elements taking part in the game have functions such as `Update()` and `Draw()`. It would be good to gather these into an interface; that way, all of the objects taking part in the game could be managed in a simpler manner.
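A minimal sketch of such an interface, assuming XNA's `GameTime` and `SpriteBatch` types (the interface name is a placeholder):
```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// hypothetical common interface for every element taking part in the game
public interface IGameElement
{
    void Update(GameTime gameTime);
    void Draw(SpriteBatch spriteBatch);
}
```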
Answers:
username_0: It might be worth looking at this class built into XNA: https://docs.microsoft.com/en-us/previous-versions/windows/xna/bb196397(v=xnagamestudio.41)
mbeyeler/opencv-machine-learning | 375296521 | Title: HTML:Bankfraud-LV detected by Avast AV; download/connection aborted
Question:
username_0: Avast AV detects HTML:Bankfraud-LV & aborts download/connection. Here's the relevant line from the Avast log file:
10/29/2018 10:20:07 PM https://codeload.github.com/username_1/opencv-machine-learning/zip/master|>opencv-machine-learning-master\notebooks\data\chapter7\BG.tar.gz|>BG.tar|>BG\2004\09\1095786133.25239_75.txt|>PartNo_0#2719639900 [L] HTML:Bankfraud-LV [Trj] (0)

Answers:
username_1: Lol what the... Is this for real?
username_1: Alright, so this is from the chapter on [classifying spam emails](https://github.com/username_1/opencv-machine-learning/blob/2513edab423a4f067ebcc4c867203db1064c35c7/notebooks/07.02-Classifying-Emails-Using-Naive-Bayes.ipynb), which uses the publicly available [Enron-Spam dataset](http://nlp.cs.aueb.gr/software_and_datasets/Enron-Spam/index.html) from Athens University. This dataset contains a number of innocuous emails (classified as 'ham') and just a bunch of spam.
The file in question is an example of a spam email (from the 'BG' folder) pretending to be from CitiBank. So in a sense, Avast is right that this is a phishing email trying to commit bank fraud 😄 but as a text file it certainly does not contain a Trojan.
I know that's easy to say, so maybe if you don't trust it, the easiest way out is not to download the data for Chapter 7. You could download the data directly from the original source if you trust them ([nlp.cs.aueb.gr](http://nlp.cs.aueb.gr/software_and_datasets/Enron-Spam/index.html)) or you could use any other email dataset instead (e.g., Athens University has another popular one that's called the [Ling-Spam corpus](http://www.aueb.gr/users/ion/data/lingspam_public.tar.gz)).
The content of the text file in question is printed below.
Cheers,
Michael
```
Return-Path: <<EMAIL>>
Delivered-To: <EMAIL>
Received: (qmail 16009 invoked by uid 115); 21 Sep 2004 15:57:50 -0000
Received: from <EMAIL> by churchill by uid 64011 with qmail-scanner-1.22
(clamdscan: 0.75-1. spamassassin: 2.63. Clear:RC:0(192.168.127.12):.
Processed in 0.44637 secs); 21 Sep 2004 15:57:50 -0000
Received: from hrbg-66-33-237-222-nonpppoe.dsl.hrbg.epix.net (192.168.127.12)
by churchill.factcomp.com with SMTP; 21 Sep 2004 15:57:50 -0000
X-Message-Info: 9hGWsd261aRE/mzVydeHYhYPycntP8Kd
Received: from <EMAIL>ly ([120.186.99.115])
by cou76-mail.moyer.adsorb.destitute.cable.rogers.com
(InterMail vM.5.01.05.12 159-633-015-445-824-04571071) with ESMTP
id <<EMAIL>>
for <<EMAIL>>; Tue, 21 Sep 2004 10:51:47 -0600
Message-ID: <333460z0xomp$1316d6m0$3641a4c0@phylogeny>
Reply-To: "<NAME>" <<EMAIL>>
From: "<NAME>" <<EMAIL>>
To: <<EMAIL>>
Subject: ATTN: Security Update from Citibank MsgID# 92379245
Date: Tue, 21 Sep 2004 21:51:47 +0500
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="--419776125423611"
----419776125423611
Content-Type: text/html;
Content-Transfer-Encoding: quoted-printable
<html>
<head>
<title>Untitled Document</title>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859=
-1">
</head>
<body bgcolor=3D"#FFFFFF" text=3D"#000000">
CITIBANK(R)
<p>Dear Citibank Customer:</p>
<p>Recently there have been a large number computer terrorist attacks over=
our
database server. In order to safeguard your account, we require that you=
update
your Citibank ATM/Debit card PIN. </p>
<p>This update is requested of you as a precautionary measure against frau=
d. Please
note that we have no particular indications that your details have been =
compromised
[Truncated]
etails
at hand.</p>
<p>To securely update your Citibank ATM/Debit card PIN please go to:</p>
<p><a href=3D"http://219.138.133.5/customer/">Customer Verification
Form</a></p>
<p>Please note that this update applies to your Citibank ATM/Debit card - =
which
is linked directly to your checking account, not Citibank credit cards.<=
/p>
<p>Thank you for your prompt attention to this matter and thank you for us=
ing
Citibank!</p>
<p>Regards,</p>
<p>Customer Support MsgID# 92379245</p>
<p>(C)2004 Citibank. Citibank, N.A., Citibank, F.S.B., <br>
Citibank (West), FSB. Member FDIC.Citibank and Arc <br>
Design is a registered service mark of Citicorp.</p>
</body>
</html>
```
Status: Issue closed
|
Atlantiss/NetherwingBugtracker | 394641091 | Title: [NPC] Troll Roof Stalkers detected from ground floor of buildings
Question:
username_0: [//]: # (Enclose links to things related to the bug using http://wowhead.com or any other TBC database.)
[//]: # (You can use screenshot ingame to visual the issue.)
[//]: # (Write your tickets according to the format:)
[//]: # ([Quest][Azuremyst Isle] Red Snapper - Very Tasty!)
[//]: # ([NPC] Magistrix Erona)
[//]: # ([Spell][Mage] Fireball)
[//]: # ([Npc][Drop] Ghostclaw Lynx)
[//]: # ([Web] Armory doesnt work)
[//]: # (To find server revision type `.server info` command in the game chat.)
**Description**:
Level 70 players going about their business in Orgrimmar will hear unstealthing sounds over and over again as the stealthed Troll Roof Stalkers can be easily detected through the roofs from the ground floor of multiple buildings.
There are several buildings that can be used to recreate this. Inside the Auction House was where I first noticed it. Going up the tower to the Orgrimmar Flight Master will result in you hearing the Stalker on a nearby roof unstealth multiple times as your character's direction faces them and turns away repeatedly due to the spiral ramp. I find Kaja's Boomstick Imports next to the Auction House to the most severe as both the second floor ramp and the roof are obstructing your vision of the above Troll Roof Stalker yet he still becomes unstealthed.
**Current behaviour**:
Troll Roof Stalkers can be detected by level 70 players from inside many buildings.
**Expected behaviour**:
I feel that a Stalker on the roof wouldn't be easily detected by people on the ground floor.
**Server Revision**:
2515
Answers:
username_1: I reported the same few years ago. http://tbc.internal.atlantiss.eu/showthread.php?tid=1353&highlight=sound
https://www.youtube.com/watch?v=SZYCX7uq50Y
its pretty annoying
username_2: Is their stealth "level" just really low? You can see them from really far away, compared to, for example, finding a stealthed rogue of the same faction.
Because not only do I feel you really shouldn't be able to find a stealthed person so far above you, there are also implications in PvP from this report that was "closed". I still believe you shouldn't be able to see a stealthed person above you.
https://gfycat.com/BountifulRemoteCygnet
#2791
username_2: What I'm saying is: is the stealth of the trolls really that bad, and can you generally detect stealthed NPCs more easily than players? Because this is where you can detect a same-faction level 70 rogue without MOD:

With MOD

And about how far you can detect the trolls

Either way, I think there is definitely something going on with how you can detect stealthed people from way above you, but I think the problem is that their stealth level is too low.
username_3: rev.3472 not an issue anymore.
Can be closed.
Status: Issue closed
|
beardofedu/example-repository | 527302103 | Title: There is a bug!
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
Status: Issue closed |
DiscipleTools/disciple-tools-mobile-app | 666450436 | Title: Unrequested data change: removed group from a parent/peer/child group field
Question:
username_0: When there are 2 or more groups listed in a parent/peer/child group field, the first group stays but the others are removed when group details are changed (setting the church start date).
The same type if issue happens for a contact.
Answers:
username_1: Confirmed on iOS 13.5.1



username_1: Hi @raschdiaz , we need to prioritize this issue for Sprint 24 in order to perform a release this weekend. Thx!
username_2: This is applicable for all the "Connections" type fields.
username_1: Note: **_Step # 4_** of the `Steps to reproduce` is the **_key_**. You must Add a Group to `Parent`, `Peer`, or `Child` Groups, and then click to navigate to `Group Details`, and `Edit`, and `Save`.
Status: Issue closed
username_0: @username_1 @raschdiaz this issue happened again in the current 1.7 version of the app when I was updating a record on my instance. I will send you the details privately. |
SqixOW/Aim_Docs_Translator | 686947309 | Title: Requesting a translation
Question:
username_0: https://www.dropbox.com/s/fkvf1vi03mb5u7u/Aimer7%20Routine%20Addendum.pdf?dl=0
This is said to be a supplemented/updated version of the aimer7 routine, by someone called tammas. Skimming through the Reddit comments, many people said that at this point it's better to start with this than with the original aimer7 routine.
https://docs.google.com/spreadsheets/d/1hAzqdi-dl5_RrURx9gnFBo9-MOghurleY3eAALht5Kk/edit#gid=0
You may already know about this one, but it's a document that collects aim-related guides. For your reference. Thank you.
Answers:
username_0: Ah, it looks like the tammas routine link I posted above is an older version. https://www.dropbox.com/s/w316s768shjissf/Tammas%27%20Routine%20Addendum.pdf?dl=0
This one looks like the latest version!
Status: Issue closed
username_0: https://www.dropbox.com/s/fkvf1vi03mb5u7u/Aimer7%20Routine%20Addendum.pdf?dl=0
tammas라는 사람의 aimer7 루틴 보완? 업데이트? 버전이라고 합니다. 레딧의 댓글들을 흝어보니 현 시점에선 aimer7 루틴 본판보다는 이걸로 시작하는게 더 좋다는 얘기가 많았습니다.
https://docs.google.com/spreadsheets/d/1hAzqdi-dl5_RrURx9gnFBo9-MOghurleY3eAALht5Kk/edit#gid=0
이건 이미 알고계실지도 모르겠지만 에임 관련 가이드들을 모아놓은 문서입니다. 참고하세요. 감사합니다.
username_1: Added to the queue.
Status: Issue closed
|
plaid/plaid-python | 1178106929 | Title: asset_report_get raising TypeError when asset report has warnings
Question:
username_0: We recently upgraded from plaid-python 7.x (non OpenAPI version) to plaid-python `9.1.0`
Since the upgrade we are seeing a case where `client.asset_report_get` throws the following error:
```
TypeError: __init__() missing 1 required positional argument: 'error'
```
It looks like if an asset report contains any warnings in the `warnings` field, then the client fails to deserialize the response from the API correctly.
I cannot easily provide a way to reproduce but I'll try to provide as much info as possible.
In our application we listen for the assets webhook:
```py
{
"asset_report_id": "xxx--xxx--xxx",
"webhook_code": "PRODUCT_READY",
"webhook_type": "ASSETS"
}
```
during the handling of that webhook, we use `client.asset_report_get` to get the contents of the report and determine if we need to apply filtering.
We saw an example where after receiving a `PRODUCT_READY` notification for a report, when we tried to get the report we got the `TypeError` mentioned above. I can see the response that came back from the plaid API looked like this:
```py
{
report: {
asset_report_id: 'xxxx',
client_report_id: None,
date_generated: '2022-03-23T11:53:28Z',
days_requested: 19,
items: [...],
user: {}
},
request_id: 'qh5EJhxemMpgEsS',
warnings: [
{
cause: {"display_message":"'This financial institution is not currently responding to requests. We apologize for the inconvenience.'","error_code":"'INSTITUTION_NOT_RESPONDING'","error_message":"'this institution is not currently responding to this request. please try again soon'","error_type":"'INSTITUTION_ERROR'","item_id":"'xxxxxx'"},
warning_code: 'OWNERS_UNAVAILABLE',
warning_type: 'ASSET_REPORT_WARNING'
}
]
}
```
The stack trace was this:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/code.py", line 90, in runcode
exec(code, self.locals)
File "<console>", line 1, in <module>
File "/<redacted>/venv/lib/python3.9/site-packages/plaid/api_client.py", line 769, in __call__
return self.callable(self, *args, **kwargs)
File "/<redacted>/venv/lib/python3.9/site-packages/plaid/api/plaid_api.py", line 1308, in __asset_report_get
return self.call_with_http_info(**kwargs)
File "/<redacted>/venv/lib/python3.9/site-packages/plaid/api_client.py", line 831, in call_with_http_info
[Truncated]
The call to `deserialize_model` that failed had the following arguments:
```
check_type=True
configuration=<plaid.configuration.Configuration object at 0x7f5733d00790>
kw_args={_check_type: True, _configuration: <plaid.configuration.Configuration object at 0x7f5733d00790>, _path_to_item: ['received_data', 'warnings', 0, 'cause'], _spec_property_naming: True, display_message: 'This financial institution is not currently responding to requests. We apologize for the inconvenience.', error_code: 'INSTITUTION_NOT_RESPONDING', error_message: 'this institution is not currently responding to this request. please try again soon', error_type: 'INSTITUTION_ERROR', item_id: 'v66OZpaA48TRXb1vveBrT450YdENeBUmPRMAO'}
model_class=<class 'plaid.model.cause.Cause'>
model_data={display_message: 'This financial institution is not currently responding to requests. We apologize for the inconvenience.', error_code: 'INSTITUTION_NOT_RESPONDING', error_message: 'this institution is not currently responding to this request. please try again soon', error_type: 'INSTITUTION_ERROR', item_id: 'v66OZpaA48TRXb1vveBrT450YdENeBUmPRMAO'}
path_to_item=['received_data', 'warnings', 0, 'cause']
spec_property_naming=True
```
Answers:
username_1: Thank you for the super detailed error report, it was very helpful! It looks like the root cause here is probably that our OpenAPI file is misrepresenting the schema of the `cause` object -- it specifies it as an `item_id` and a nested `error` object when your copy and paste of the response data shows that the `error` object is actually "flattened" in the response, rather than nested. I'm working on a fix now. Do you guys need a hotfix urgently, or is it ok if we wait for the next regularly scheduled release in ~3 weeks?
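To make the mismatch concrete, here is a minimal sketch of the two shapes (built from the response pasted above; the nested form is an assumption inferred from the `missing 1 required positional argument: 'error'` message, not from the actual spec file):
```python
# Shape the generated `Cause` model apparently expects:
# an item_id plus a nested `error` object.
expected_cause = {
    "item_id": "xxxxxx",
    "error": {
        "error_type": "INSTITUTION_ERROR",
        "error_code": "INSTITUTION_NOT_RESPONDING",
        "error_message": "this institution is not currently responding ...",
        "display_message": "This financial institution is not currently responding ...",
    },
}

# Shape the API actually returns: the error fields sit flattened next to
# item_id, so deserialization never finds an `error` key and raises TypeError.
actual_cause = {
    "item_id": "xxxxxx",
    "error_type": "INSTITUTION_ERROR",
    "error_code": "INSTITUTION_NOT_RESPONDING",
    "error_message": "this institution is not currently responding ...",
    "display_message": "This financial institution is not currently responding ...",
}
```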
username_0: I don't want to be too pushy, if we could get a fix that would be ideal, but we could probably wait too. This is a bit off topic to the original issue but the reason we decided to upgrade to the OpenAPI based client is because we were migrating to the Account Select V2 feature which according to [the docs](https://plaid.com/docs/link/account-select-v2-migration-guide/#automatic-migration) - "we will automatically migrate all of your customizations to Account Select v2"
We made all the preparations and enabled it for all of our plaid link customizations in production, but we had missed the one [recommendation, which was to upgrade our python client to at least `8.3.0`](https://plaid.com/docs/link/account-select-v2-migration-guide/#minimum-version-for-account-select-update-mode) (we were on 7.x).
We noticed, after enabling account select v2 with the old python client, that in the plaid link flow, users who were redirected back to our application after authenticating with an Oauth institution would be presented with an account selection screen within plaid link after they had already been presented with an account selection screen on the bank side. We reached out to plaid support and they confirmed that a user who goes through the Oauth flow shouldn't see a second account selection step in plaid link. Their recommendation was to update our plaid python client.
To be honest I don't really understand this recommendation given that the flow is entirely frontend and even though we're on an old python client, we're targeting the latest Plaid API. Unless there is something different about how the newer clients call out to create the plaid link token or the API is doing something differently based on the client version in the request headers.
Again - this is off topic but if you know of any workarounds to get the desired plaid link frontend experience using the outdated client, then we'd be happy to try that out while you get a release out on your regular schedule!
username_1: The only reason we recommend upgrading the client library is to enable use of the `update.account_selection_enabled` parameter when initializing Link for Update Mode, in order to use Update Mode to pick up new accounts detected by the `NEW_ACCOUNTS_AVAILABLE` webhook. This parameter is not supported in older client libraries. If you aren't using this, you can continue to use older client libraries.
While it is true that we can sometimes skip the Plaid Link account selection pane after OAuth account selection, the business logic around this is quite complicated and we do not always skip the pane -- it depends on things like whether you are using multi-select or single-select, whether you use account filtering, and how the specific institution that's being connected to implements their OAuth account selection options. The fact that the pane isn't being skipped is likely expected and shouldn't have anything to do with the client library. |
habx/apollo-multi-endpoint-link | 991056611 | Title: Missing dependencies
Question:
username_0: Looks like tscov is missing from dev dependencies
Adding this and running `npm run type:coverage` shows the error:
Error: ENOENT: no such file or directory, stat '/home/username_0/projects/git/www/apollo-multi-endpoint-link/@habx/config-ci-front/typescript/library.json'
Answers:
username_1: Hi @username_0,
We didn't intend to manage it. [I removed it ](https://github.com/habx/apollo-multi-endpoint-link/pull/195)but if you find a way to fix it, PRs are welcome :)
Status: Issue closed
|
YoEight/eventstore-rs | 598593146 | Title: Questions about connection's state
Question:
username_0: Hello (again),
I'm trying to set up a pool of `eventstore::Connection` by creating an adapter for the crate [`l337`](https://docs.rs/l337/0.4.1/l337/) and I have some questions about how to do certain things with this library (`eventstore`).
I hope my questions and remarks will be relevant; it's been just a few weeks since I started playing with Rust, so be lenient :slightly_smiling_face:
First of all, I find it pretty strange that `ConnectionBuilder::single_node_connection()` and `ConnectionBuilder::cluster_nodes_through_gossip_connection()` directly return a `Connection` and not a `Result<Connection, _>`.
They are async functions, which suggests that they are trying to contact the EventStore server; that server may be unreachable, so it would make more sense to get a `Result`.
For the needs of the adapter, I need a way to know if the connection is healthy.
How I'm doing it currently is by making this request: `connection.read_all().max_count(0).execute().await`.
Do you think there is a better (more performant) way to check if the connection is (still) alive?
Wouldn't it be possible to get the connection state in a synchronous fashion? Maybe that's a different subject from my previous question.
I'm thinking of a state that we could check at any time (a method on `Connection`?), which would be updated when the server stops responding, for example, or when a request fails.
I don't know if you can be informed in any way when the connection with the server is lost, in order to update such a state?
There is one last thing I wanted to mention. It doesn't have much to do with the rest, but I noticed some logs like `HEARTBEAT TIMEOUT` or `Persistent subscription lost connection` in the EventStore server when calling `connection.shutdown().await`, which makes me think of a badly closed connection.
Here are the logs in question :
```
# The first request is made
[00001,05,21:53:11.511] External TCP connection accepted: [Normal, 172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}].
[00001,26,21:53:11.511] Connection '"external-normal"' ({b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}) identified by client. Client connection name: '"ES-37f46d53-dd9b-4c66-bddb-b7030049fd24"', Client version: V2.
# The `.shutdown().await` is called, followed by `sleep(Duration::from_secs(10))`, but the following logs appears only 5 seconds after
[00001,14,21:53:16.524] Closing connection '"external-normal"":ES-37f46d53-dd9b-4c66-bddb-b7030049fd24"' [172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}] cleanly." Reason: HEARTBEAT TIMEOUT at msgNum 3"
[00001,14,21:53:16.525] ES "TcpConnection" closed [21:53:16.525: N172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}]:Received bytes: 134, Sent bytes: 204
[00001,14,21:53:16.525] ES "TcpConnection" closed [21:53:16.526: N172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}]:Send calls: 4, callbacks: 4
[00001,14,21:53:16.525] ES "TcpConnection" closed [21:53:16.526: N172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}]:Receive calls: 4, callbacks: 3
[00001,14,21:53:16.525] ES "TcpConnection" closed [21:53:16.526: N172.17.0.1:47828, L172.17.0.2:1113, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}]:Close reason: [Success] "HEARTBEAT TIMEOUT at msgNum 3"
[00001,14,21:53:16.525] Connection '"external-normal"":ES-37f46d53-dd9b-4c66-bddb-b7030049fd24"' [172.17.0.1:47828, {b3d1959c-4a64-42d9-baa3-aac9d7f13bbf}] closed: Success.
[00001,20,21:53:16.528] Persistent subscription lost connection from 172.17.0.1:47828
```
Maybe it's completely normal, but I'm not sure :thinking:
Thank you (again) for taking the time to answer my questions.
Answers:
username_1: While I don't exclude the possibility that something could have been left open, it's improbable. When shutting down, the state machine blocks further new commands and prevents existing ones from sending new packages. The state machine completes every pending command (like persistent subscriptions) before closing the TCP connection. I would investigate your scenario.
username_1: Next time, please consider the [Discord] server for questions too.
[Discord]: https://discord.gg/x7q37jJ
username_0: Thank you for your detailed answers.
I still have some questions but I'll go to the Discord server and ask them as you advise :wink:
Status: Issue closed
|
andrewmcdan/open-hardware-monitor | 64167970 | Title: Open Hardware Monitor 0.5.1 Beta does not find my HDDs
Question:
username_0: ```
What is the expected output? What do you see instead?
I do not see my HDDs. I have two drives: a SAMSUNG 830 Series SSD plus a regular 1TB SATA 2 HDD.
My complete hardware is listed here ---> http://www.sysprofile.de/id21540
What version of the product are you using? On what operating system?
Open Hardware Monitor 0.5.1 Beta / Windows 7 SP 1
Please provide any additional information below.
I have Open Hardware Monitor 0.5.1 Beta and HWiNFO 4.09-1810 installed. HWiNFO
finds my HDDs, but Open Hardware Monitor does not. Why?
Please attach a Report created with "File / Save Report...".
See my screenshot!
```
Original issue reported on code.google.com by `<EMAIL>` on 13 Dec 2012 at 6:01
Attachments:
- [Bild 2.jpg](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-421/comment-0/Bild 2.jpg)<issue_closed>
Status: Issue closed |
servicecatalog/development | 278487007 | Title: Get VOUsageLicense object to assign subscription
Question:
username_0: Hi,
I am trying to assign a subscription to all the users from the 'XYZ' org. I am using the addRevokeUser method. I am able to remove the subscription by sending a VOUser list, but I am not able to get the VOUsageLicense objects to pass to this method. How can I get them? Please suggest.
Answers:
username_1: Hi @username_0
not sure if I fully got your intention here. Note that subscriptions can only be assigned to existing users of the respective customer. Thus, the condition in your case is that the 'XYZ' org is a customer of the supplier organization providing the service. Otherwise you should look at the AccountService API to register a customer, as administrator of the given supplier organization. See AccountService#registerCustomer and AccountService#registerKnownCustomer.
For adding all users, your code should look something like the following:
```java
IdentityService id = connectAsSubscriptionAdmin();

// Collect one usage license per user of the organization
List<VOUsageLicense> licences = new ArrayList<VOUsageLicense>();
List<VOUserDetails> users = id.getUsersForOrganization();
for (VOUserDetails user : users) {
    VOUsageLicense usageLicence = new VOUsageLicense();
    usageLicence.setUser(user);
    licences.add(usageLicence);
}

// Add users to the subscription
SubscriptionService subService = ...;
subService.addRevokeUser(subscriptionId, licences, null);
```
Status: Issue closed
|
juj/fbcp-ili9341 | 473696841 | Title: #define ALL_TASKS_SHOULD_DMA
Question:
username_0: /home/pi/fbcp-ili9341/spi.cpp: In function ‘void RunSPITask(SPITask*)’:
/home/pi/fbcp-ili9341/spi.cpp:281:24: error: ‘SPIDMATransfer’ was not declared in this scope
SPIDMATransfer(task);
^
/home/pi/fbcp-ili9341/spi.cpp:288:26: error: ‘WaitForDMAFinished’ was not declared in this scope
WaitForDMAFinished();
Answers:
username_1: the two functions are declared in https://github.com/username_1/fbcp-ili9341/blob/aa01d12cd78cd15fd655a96b4811eb9437da136a/dma.h#L127-L135
conditional to `USE_DMA_TRANSFERS` being defined. `spi.cpp` does include that file: https://github.com/username_1/fbcp-ili9341/blob/aa01d12cd78cd15fd655a96b4811eb9437da136a/spi.cpp#L13
so this suggests that you had a build where `ALL_TASKS_SHOULD_DMA` was defined, but `USE_DMA_TRANSFERS` was not. Pushed a commit to verify that such code does not get compiled with mismatched directives. Run a `git pull` and double-check your build flags?
username_0: Got it, thank you username_1
Status: Issue closed
|
thymikee/jest-preset-angular | 428907174 | Title: InlineHtmlStripStylesTransformer is not strict enough
Question:
username_0: ## Current Behavior
_Any_ object literal has its property assignments transformed, causing source code such as:
```ts
console.log({
styles: ['styles.css']
})
```
to print out
```ts
{
styles: []
}
```
## Expected Behavior
Only object literals passed to `@angular/core`'s `@Component` decorator should have their property assignments transformed.
Answers:
username_0: Unless someone else wants to tackle this, I can try and make a PR.
username_1: PR is welcome 👍😀
username_2: Yeah, this caveat is known and discussed
* in the [transformer itself](username_8/jest-preset-angular/src/InlineHtmlStripStylesTransformer.ts#L11-L14)
* the [original PR]()
* this [follow-up PR](username_8/jest-preset-angular/pull/211), caused by
* [this issue](https://github.com/username_8/jest-preset-angular/issues/210).
The problem is that, theoretically, someone might rename the annotation, or pull into the annotation an object that is not declared inside the annotation itself.
The AST transformer does not know if an object is actually *the* Angular Component Decorator or if an object with `styles` and `templateUrl` is actually a component configuration, it only knows in what relation code pieces are and how they are named.
Consider these examples:
```ts
import { Component } from '@angular/core';
// classic and easy
@Component({ templateUrl: './file.html' }) export class MyFirstComp { }
// difficult
const MyAnnotation = Component;
@MyAnnotation({ templateUrl: './file.html' }) export class MySecondComp { }
// also difficult
const componentConfig = { styles: [], templateUrl: './file.html' };
@Component(componentConfig) export class MyThirdComp { }
// also difficult
const partialComponentConfig = { templateUrl: './file.html' };
@Component({...partialComponentConfig, styles: []}) export class MyFourthComp { }
```
We should discuss approaches and their caveats before implementing it.
username_3: Would it be an expectation to even support the second case?
```
const MyAnnotation = Component;
@MyAnnotation({ templateUrl: './file.html' }) export class MySecondComp { }
```
For the others we are guaranteed to be in `@Component`
username_2: I agree, the second example is not very important.
The third and fourth are the problematic ones.
@username_3 The AST transformer is written in TypeScript, but it works with AST objects. AST is a representation of the code text, not of the evaluated objects.
So this situation
```ts
const componentConfig = { styles: [], templateUrl: './file.html' };
@Component(componentConfig) export class MyThirdComp { }
```
is definitely a problem, as in this example each line is an independent object. References are made when JavaScript is interpreted, not when the AST transformer is executed. The `styles` assignment is not inside the decorator and therefore we would have to search for the `componentConfig` variable in the whole file, maybe even in other files if it is imported.
username_0: @username_2 I see the issues. That is fair.
Let's make it not as strict as looking for where the object literal is.
I think if we check if the file imports Component from `@angular/core` it will be safe enough. The difficult cases are still satisfied since they all have to import from `@angular/core`.
Then we can exclude files which do not include:
```ts
import { Component } from '@angular/core';
import * as core from '@angular/core';
import { Component as Comp } from '@angular/core';
```
What do you think?
username_2: Ok, so one more thought I just had: `templateUrl` is actually more important to transform than `styles` is to remove. Without `templateUrl` being transformed to `template`, the component will not even load.
`styles`, on the other hand, is just removed for performance reasons (correct me if I'm wrong), as we assume styles are usually not tested in jest; that is better done in e2e frameworks.
### Proposal
We could also think about handling them differently, although this might complicate the code unnecessarily.
**`templateUrl`**:
* replaced everywhere
* alternatively only (as suggested by @username_0) in files where `Component` gets imported from `@angular/core` or `TestBed` gets imported from `@angular/core/testing`
**`styles`**:
* replaced only if declared directly inside component decorators
This would seem more practical to me as `styles` is also used in more contexts than `templateUrl`.
username_1: I think the proposal looks practical and good. I vote for it.
username_4: If someone lands here before that PR is merged - I was able to hack around the problem temporarily by changing:
```ts
mockObj = {
  styles: 'a {color: blue}'
}
```
to
```ts
mockObj = {}
mockObj.styles = 'a {color: blue}'
```
username_5: This error was driving me mad until I found this issue.
Glad to see a PR is on the way
username_6: Wow, discovered this today as well. Suddenly a unit test broke after upgrading to Jest 24.
username_2: Yeah, I am almost done with the PR, will submit it next week.
It's not really an error, it is an intrusive caveat that kept the transformer simple, but can create side-effects.
username_6: I renamed my property as an easy workaround. 😉
username_7: This is a nightmare to debug. We were lucky to find it by searching through node_modules. Is there any way you can do this without using AST transformations? Or, if you are going to use AST transformations, only modify the code when you are sure that it is appropriate?
username_2: @username_7 The PR is on the way already #261
If you want to avoid them, you can remove AST Transformers by overriding the array in your jest config and clearing it. To still use jest without the transformers, you will have to inline your templates and styles though.
Also you can test the branch of the PR with the appropriate implementation on `username_2:feat/ast-transformer-strict-styles` in your project using (in your `package.json`):
`"jest-preset-angular": "github:username_2/jest-preset-angular#feat/ast-transformer-strict-styles"`
Testing and reviewing the implementation can speed up the merge.
Finally you are free to implement and use custom transformers.
Status: Issue closed
|
FreeUKGen/MyopicVicar | 111723300 | Title: Serious Performance issue on File delete
Question:
username_0: Those watching issue 579 will know that we were having an issue with orphan entries and records.
This was traced to a mistake in the code. A delete was NOT destroying entries and search records because the delete does not initiate the callback that is needed for their destruction.
The solution is trivial. The recovery is painful and the long-term implications significant. Why?
The act of deleting records causes a significant performance impact. The destruction of a file of 100 records brings the servers to their knees for several seconds. This is because the deletion and index adjustments are propagated in real time to all servers and they place locks until the writes are completed.
Answers:
username_0: Perhaps we spin the file delete as a rake task with the appropriate throttling.
Should read:
Perhaps we run the file delete as a rake task in the nightly update with the appropriate throttling.
username_1: I am in favor of moving long-running tasks to a background task to be run via forking a rake call.
I'm dismayed that deleting and adjusting indexes for 40K records of our 30M is such a long-running task. (I'm not questioning Kirk's observations here -- rather I'm just frustrated that the version of MongoDB we run doesn't do this more painlessly.)
In addition to Kirk's proposed technical fix #1 (pushing file deletion into the background) and social fix #2 (discouraging unnecessary file deletions), I'd like to propose a few additional options for consideration:
3) Spend some time researching MongoDB replication, deletion and B-tree rebalancing to figure out if we're doing something wrong.
4) Upgrade to the latest version of MongoDB, which has had one major release and probably a few minor releases after the version we're running. (Note that this would require time from the developers to test compatibility and performance -- especially for replicated deletes)
5) Postpone the problem by making record deletion actually flip an "i_am_deleted_dont_look_at_me" bit on the freereg1_csv_file, freereg1_csv_entry, and search_record collections, then modify the search engine, admin tools, and DAP to filter those deleted entries out. We could then set up a long-running sweeper task on a cron job to delete the "deleted" records pokily enough that we don't get MongoDB all stirred up; a rough sketch follows below. (If this sounds like I'm proposing we implement our own garbage collection, I can't disagree.)
I do think that #1 is a must-do, and #2 sounds appealing (though I don't know how to make it happen). I don't feel like any of 3-5 are magic solutions, but Kirk asked for other options, so I thought I'd oblige.
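Strictly as an illustration of option 5 (collection names come from this thread; the flag name and batch size are made up, and the real app would of course go through its own ORM), the shape of it in pymongo would be:
```python
from pymongo import MongoClient

db = MongoClient().freereg  # hypothetical database name

def soft_delete_file(file_id):
    """Flip the 'deleted' bit instead of destroying anything."""
    for coll in (db.freereg1_csv_entries, db.search_records):
        coll.update_many({"freereg1_csv_file_id": file_id},
                         {"$set": {"i_am_deleted": True}})
    db.freereg1_csv_files.update_one({"_id": file_id},
                                     {"$set": {"i_am_deleted": True}})

def sweep(batch_size=100):
    """Cron-driven sweeper: delete slowly so replication keeps up."""
    for coll in (db.search_records, db.freereg1_csv_entries,
                 db.freereg1_csv_files):
        ids = [d["_id"] for d in coll.find({"i_am_deleted": True},
                                           limit=batch_size)]
        if ids:
            coll.delete_many({"_id": {"$in": ids}})
```
Every reader (search engine, admin tools, DAP) would then need `{"i_am_deleted": {"$ne": True}}` added to its queries.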
username_3: Interestingly (and before I read BenB's comment above), I had been running through my mind the idea of: don't destroy records, then! Just shunt them off into a false home, as it were, where they are not accessible in searches but could be dealt with in a mass event in the future.
Something like what Ben suggests but better put, in (5).
username_3: To respond to EricD's comment above: deleting files to tidy up church names is the only way to do some of them - and I say this from having been on the job of doing it.
The automatic case-changing feature in F2 is causing much of that workload - problem issue #466, I think. If that issue were dealt with, then changing church names would not be so necessary.
username_0: EricD wrote: "I think that deleting these Orphans can wait, now that we know the cause which has been fixed."
Unfortunately while we know the cause it has NOT been fixed. The whole point of this story is that a fix cannot be implemented until we have decided how to do it.
Every batch delete done will cause a new set of orphans.
username_0: Responding to both EricD and EricB on what FR actions require a batch delete be performed.
The answer is "none": they are all done in place. All relocations, name changes and owner changes are implemented in place with just changes to pointers.
If there are situations that do lead to the need for a batch delete, please raise them in another story.
The only currently known situations are 1) the same file existing in different userids coming from FR1 and 2) the batch is no longer required.
username_0: Doug wrote: What version of mongodb is running on production? The version on vervet is 2.6.7, which has database-level locking. It looks like newer versions support document-level locking, which may significantly improve the performance of writes to update the indexes (or may not -- hard to tell for a specific use-case without actually trying).
username_0: We are running the same version, 2.6.7, in both development and production.
username_0: Doug wrote:
Additional reading indicates that document-level locking requires use of a new storage engine called WiredTiger, but I would recommend against that since it looks like WiredTiger is not very mature yet. Version 3 does support collection-level locking with the standard MMAPv1 storage engine, which still may improve the performance quite a bit. But like Ben said, that would require compatibility testing, etc., and there is no guarantee the performance increase would be sufficient to do the clean-up in real-time. That being the case, my opinion is that it is probably better to go ahead with one of your other suggested solutions for now (ignoring orphans in the search / cleaning up orphans as part of the nightly cron taks).
username_0: The major change in Mongo 3 was the introduction of a new storage engine, which is their future; it is still not a stable storage engine from what I see and we would have to think carefully about adopting it. It appears, though, that different slaves can operate with different engines. Our use of Mongo is quite straightforward, so most of the apparent incompatibilities do not affect us; it's the unknowns that bite.
username_0: Doug wrote: Looks like our opinion about the new storage engine is the same.
username_0: Yes we seem to be on the same wavelength wrt WiredTiger.
Collection locking is unlikely to be helpful since we are deleting search records and that is what the searches are hitting.
username_0: Rake task written, tested. Tested on t3, applied to colobus and 1 file of my own has been deleted. Will see if rake task runs as expected as part of production suite.
Status: Issue closed
|
ngrx/platform | 593828161 | Title: Using NgRx Data for non-CRUD operations
Question:
username_0: Hi,
Could you guide me on how to use NgRx Data for non-CRUD operations like:
1. Implementing and storing navigation between different pages?
2. Adding more methods to an entity service like sorting?
Answers:
username_1: Thanks @username_0, but these questions are better asked on the [gitter channel](https://gitter.im/ngrx/platform)
Status: Issue closed
|
jakkulabs/PowervRA | 149481414 | Title: ISSUE: Remove-vRAReservation - Id Parameter in URI should be $ReservationId not $Id
Question:
username_0: The by id block needs updating to the following:
***
'ById' {
foreach ($ReservationId in $Id) {
if ($PSCmdlet.ShouldProcess($ReservationId)){
$URI = "/reservation-service/api/reservations/$($Id)"
Write-Verbose -Message "Preparing DELETE to $($URI)"
$Response = Invoke-vRARestMethod -Method DELETE -URI "$($URI)"
Write-Verbose -Message "SUCCESS"
}
***
Answers:
username_0: Updated the incorrect snippet: $ReservationId
Status: Issue closed
username_0: The by id block needs updating to the following:
***
'ById' {
foreach ($ReservationId in $Id) {
if ($PSCmdlet.ShouldProcess($ReservationId)){
$URI = "/reservation-service/api/reservations/$($ReservationId)"
Write-Verbose -Message "Preparing DELETE to $($URI)"
$Response = Invoke-vRARestMethod -Method DELETE -URI "$($URI)"
Write-Verbose -Message "SUCCESS"
}
***
Status: Issue closed
|
daydaychallenge/leetcode-scala | 629718780 | Title: 125. Valid Palindrome
Question:
username_0: #### [125. Valid Palindrome](https://leetcode.com/problems/valid-palindrome/)
Given a string, determine if it is a palindrome, considering only alphanumeric characters and ignoring cases.
**Note:** For the purpose of this problem, we define empty string as valid palindrome.
**Example 1:**
```
Input: "A man, a plan, a canal: Panama"
Output: true
```
**Example 2:**
```
Input: "race a car"
Output: false
``` |
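Since the issue only states the problem, here is a minimal two-pointer sketch of one possible solution (written in Python purely to illustrate the algorithm, even though this repo targets Scala):
```python
def is_palindrome(s: str) -> bool:
    # Walk inward from both ends, skipping non-alphanumeric characters
    # and comparing case-insensitively; "" counts as a valid palindrome.
    i, j = 0, len(s) - 1
    while i < j:
        if not s[i].isalnum():
            i += 1
        elif not s[j].isalnum():
            j -= 1
        elif s[i].lower() != s[j].lower():
            return False
        else:
            i, j = i + 1, j - 1
    return True

assert is_palindrome("A man, a plan, a canal: Panama")
assert not is_palindrome("race a car")
```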
JuliaLang/julia | 1072850038 | Title: faster `hash` for specific types
Question:
username_0: ```julia
julia> isequal(BitSet(1:100000), Set(1:100000))
true
```
so
```julia
julia> hash(BitSet(1:100000))
0x0bbb2afc9d6e69a0
julia> hash(Set(1:100000))
0x0bbb2afc9d6e69a0
```
`hash` needs to be defined so that `isequal(x,y)` implies `hash(x)==hash(y)`, and it is good that two sets with the same members are `isequal`, even if their internal representation differs.
But the performance...
```julia
julia> @time hash(BitSet(1:100000))
0.000281 seconds (4 allocations: 12.469 KiB)
0x0bbb2afc9d6e69a0
julia> @time hash(BitSet(1:100000).bits)
0.000009 seconds (4 allocations: 12.469 KiB)
0x8f1d49c388957d15
```
It would be nice if we could have the best of both worlds. What about a hash function like `hash2(domain, x, h)` with a fallback
`hash2(::Type{<:Any}, x, h) = hash(x, h)`, but then e.g. `hash2(::Type{BitSet}, x::BitSet, h) = hash(x.bits, h)`
Then code like
https://github.com/JuliaLang/julia/blob/30fe8cc2c19927cf4b4a5fe1ba1cbf4c2b7b7d84/base/dict.jl#L280-L300
would make sure that the type parameter `K` gets to the 3-argument hash function.
The original property that `isequal(x,y)` implies `hash(x)==hash(y)` would still hold.
A new property emerges:
`isequal(x,y)` implies `hash2(T, x) == hash2(T, y)` where `T = promote_foo(typeof(x), typeof(y))` and `promote_foo` could be something simple like `promote_foo(S, T) = ifelse(S == T, S, Any)`
Answers:
username_1: This seems related to the proposal at https://github.com/JuliaLang/julia/pull/37964. |
WebClub-NITK/Hacktoberfest-2k21 | 1012483637 | Title: Script for Changing Wallpapers based on TOD
Question:
username_0: ### Description
The main aim here is to write a script to change wallpapers on the home screen of Ubuntu (or any Debian-based Linux distro), based on the time of day.
Any scripting language can be used for this task - Eg: python, bash, Ruby, etc.
As soon as the user switches on their system, a wallpaper should be displayed that corresponds to the current TOD, and it should be updated as the day progresses.
Special task: figure out the crontab config for the above script, if any is needed; a rough sketch follows below.
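A minimal sketch of one way to do this (GNOME-specific `gsettings` call; the wallpaper folder, file names, and hour boundaries are all assumptions to adapt):
```python
import subprocess
from datetime import datetime
from pathlib import Path

# Hypothetical time slots: each entry is (starting hour, file name).
SLOTS = [(6, "morning.jpg"), (12, "afternoon.jpg"),
         (18, "evening.jpg"), (22, "night.jpg")]

def pick(hour: int) -> str:
    name = "night.jpg"  # default for the small hours
    for start, slot_name in SLOTS:
        if hour >= start:
            name = slot_name
    return name

wallpaper = Path.home() / "wallpapers" / pick(datetime.now().hour)
# Works on GNOME desktops; when run from cron, the job additionally needs
# DBUS_SESSION_BUS_ADDRESS exported, e.g. a crontab entry along the lines of:
#   0 * * * * export DBUS_SESSION_BUS_ADDRESS=...; python3 wallpaper_script.py
subprocess.run(["gsettings", "set", "org.gnome.desktop.background",
                "picture-uri", wallpaper.as_uri()], check=True)
```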
### Details
- Technical Specifications:
- Any Scripting Language
- Type of issue: Single.
- Time Limit: 24 hours(from the moment a user is assigned this issue).
### Resources
List of resources that might be required / helpful.
- https://www.digitalocean.com/community/tutorials/how-to-use-cron-to-automate-tasks-ubuntu-1804
- https://linuxconfig.org/bash-scripting-tutorial
- https://www.geeksforgeeks.org/os-module-python-examples/
### Directory Structure
Create a folder named Linux_Script_Wallpaper_Switcher in the Development folder. The file should be named wallpaper_script.[py,sh,rb].
#### Note
1. Please claim the issue first by commenting here before starting to work on it.
2. Once you are done with the task and have created a Pull Request, please tag @username_0 to request a review.
Answers:
username_1: Hey @username_0, I would like to work on this issue!
Also by changing the wallpaper based on the time of day, how frequently do we want the wallpaper changed? |
laravel/horizon | 246017588 | Title: Horizon & database queue driver
Question:
username_0: I see a lot of references to Redis but can Horizon also be used with the database driver?
Answers:
username_1: At this moment it only supports Redis.
username_0: Too bad. Anyway, thanks for the clear answer. I should make some time to switch out my driver 👍
Status: Issue closed
|
ImperialAlex/XLA_FixedArsenal | 50308417 | Title: Add Documentation (github readme/Wiki)
Question:
username_0: Add documentation, both for the vanilla behaviour as well as for the custom extensions.
This might, depending on length/detail, be part of the readme.md on the main repo page,
or it might be moved to the repo's wiki.<issue_closed>
Status: Issue closed |
vitalii-andriiovskyi/ngx-owl-carousel-o | 727902829 | Title: Ignoring items, ignoring autoplay
Question:
username_0: ```
customOptions: OwlOptions = {
loop: true,
margin: 30,
nav: false,
autoplay: true,
autoplayTimeout: 4000,
smartSpeed: 1000,
items:1,
navText: ['<i class="fa fa-angle-left"></i>', '<i class="fa fa-angle-right" ></i>'],
responsive: {
0: {
items: 1,
},
600: {
items: 1
},
1000: {
items: 1
}
}
}
```

they do not scroll. I can click and drag, and they kind of bounce back into place. This is not how my options are set; why are my options being ignored? Version ^2
Answers:
username_0: Never mind, I put the options in the wrong component
Status: Issue closed
|
pythonarcade/arcade | 1014040589 | Title: Unclosed file in `setup.py`
Question:
username_0: ## Bug Report
### Actual behavior:
`setup.py` opens a file, but doesn’t close it.
### Expected behavior:
If `setup.py` opens any files, it closes them before finishing.
### Steps to reproduce/example code:
```
$ python -W always -- setup.py --name
arcade/setup.py:7: ResourceWarning: unclosed file <_io.TextIOWrapper name='arcade/version.py' mode='r' encoding='UTF-8'>
exec(open("arcade/version.py").read())
ResourceWarning: Enable tracemalloc to get the object allocation traceback
arcade
$
```<issue_closed>
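A minimal sketch of a possible fix for the quoted line 7, using a context manager so the file is closed deterministically:
```python
# Instead of: exec(open("arcade/version.py").read())
with open("arcade/version.py", encoding="utf-8") as fh:
    exec(fh.read())  # still defines the version constant(s) in scope
```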
Status: Issue closed |
kalexmills/github-vet-tests-dec2020 | 758443996 | Title: PureFusionLLVM/llgo: third_party/gofrontend/libgo/go/go/internal/gccgoimporter/gccgoinstallation_test.go; 10 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/PureFusionLLVM/llgo/blob/76a21ddaba394a63b7a216483394afe07ac3ef5d/third_party/gofrontend/libgo/go/go/internal/gccgoimporter/gccgoinstallation_test.go#L187-L196)
<details>
<summary>Click here to show the 10 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range [...]importerTest{
{pkgpath: "io", name: "Reader", want: "type Reader interface{Read(p []uint8) (n int, err error)}"},
{pkgpath: "io", name: "ReadWriter", want: "type ReadWriter interface{Reader; Writer}"},
{pkgpath: "math", name: "Pi", want: "const Pi untyped float"},
{pkgpath: "math", name: "Sin", want: "func Sin(x float64) float64"},
{pkgpath: "sort", name: "Ints", want: "func Ints(a []int)"},
{pkgpath: "unsafe", name: "Pointer", want: "type Pointer unsafe.Pointer"},
} {
runImporterTest(t, imp, nil, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 7<PASSWORD>afe07ac3ef5d<issue_closed>
Status: Issue closed |
dhenry-KCI/TMDL-_Tracking-Widget | 66743341 | Title: Phase II - SHA DEV - potential tree sites display 1 contract, while the tab is actually null
Question:
username_0: Possible DB Issue -
TMDL Strategies > Tree Planting > site name 080215UT-A
Contract (1) > no contract associated
Answers:
username_1: @username_0, I can't find this strategy. Can you provide more information? Thanks!
username_0: @username_1 - the site is in district 5 - Charles County. This issue should persist with any of these tree sites with a status of "Proposed" or "Potential." It does not necessarily have to be the exact site I used as an example. If you still can't find one, let me know! Thanks
username_1: @username_0, I can't find a planting tree with a status of 'Potential' on KCI DEV. For the planting tree with a status of 'Proposed', it seems working.
username_0: @username_1 this was present in SHA DEV.
username_2: Cannot find that SITE_NAME value in LUCHANGE_BMP in the database. Can you be more specific about your selection process so we can replicate it exactly? (You don't state your selection criteria -- district =? or county?)
username_0: @username_2 I apologize, I thought I left the comment that it is District 5 > Charles county. Let me know if you cant find it and I will either find an different example or do a screen share.
username_2: Grace -- I think the issue is when there is no contract associated, the count will still say (1) on the Contracts tab. It probably then causes an issue with the count when adding one. I found one without a contract behaving in this way, that you could test in KCI dev:
TMDL Strategies --> Planting Trees --> District 3
TMDL = 160104UT (can just sort by the Contract column, it is the only empty one)
username_1: @username_3, for the example Tara provided (TMDL activity id is '1c514530-2724-4c2a-9741-e0f0849ed628'), the contract count returned is 1. But there is no contract number info for this contract. Can you take a look at the database? Thanks.
username_3: Fixed. Please test it again.
username_0: verified that this is working in KCI DEV
username_0: verified that this is working in SHA DEV
Status: Issue closed
|
johnny-morrice/webdelbrot | 152412249 | Title: Become pure JS
Question:
username_0: Absorb webdelbrot-static and become pure JS client of restfulbrot (godelbrot REST service).
Answers:
username_0: This is a silly idea. Instead, I will mark this project obsolete and carry on the new ideas in a new project called wurstbrot-web.
username_0: No! This project shall be alive, and shall contain go code. It shall contain go code that we process within gopherjs. It will be a client of a new restclient library that will be added to godelbrot.
That library will not call net/http directly, but provide useful operations for a restclient.
Then in wurstbrot-web, we shall take that code and send it our callbacks to javascript.
Status: Issue closed
username_0: Fixed with merge of 18b296d |
NationalSecurityAgency/ghidra | 767016953 | Title: ARM: Incorrect disassembly of some thumb2 bl instructions as vst4.x in Ghidra 9.2
Question:
username_0: **Describe the bug**
Ghidra 9.2 disassembles thumb2 bl <negative offset around 0x3xxxxx> as vst4.x and adds warning bookmark
"Instruction pcode is unimplemented"
Ghidra 9.1.2 and other disassemblers correctly disassemble the same code as bl
**To Reproduce**
- Use the attached .o file or compile the .S to .o with arm gcc (I verified the test case using 4.9.3, but it should work in anything that understands the assembly syntax)
`arm-none-eabi-gcc -c -march=armv7-a -o ghidra-bl-bug.o ghidra-bl-bug.S`
- Import the .o file in Ghidra, accept default options
- Open and analyze with default options
**Expected behavior**
The disassembly at 0034301e should be
`0034301e cc f4 ef ff bl bl_target`
not
`0034301e cc f4 ef ff vst4.64 {d31 [],d1[],d3[],d5[]},[r12 ]`
**Attachments**
Attached file is a zip with source and .o file of a minimal test case to reproduce the issue.
[ghidra-bl-bug.zip](https://github.com/NationalSecurityAgency/ghidra/files/5692137/ghidra-bl-bug.zip)
**Environment:**
- OS: Windows 10
- Java Version: 11.0.2
- Ghidra Version: 9.2 public release
**Additional context**
I haven't determined the exact bit pattern that triggers the problem, but it seems to happen for a wide range around -0x3xxxxx.
Ghidra defaults to loading the .o as armv8le (likely related to #1521), but the disassembly is the same if you force armv7le to match the -march option.
Answers:
username_1: Does look like a copy/paste problem. I've accepted and committed the PR. Good find and diagnosis.
Status: Issue closed
username_2: Corrected for 9.2.2 release |
ncabatoff/process-exporter | 171149737 | Title: Please make releases available
Question:
username_0: This stuff is great, please make builds available for Linux (X86, X64, ARM builds would be amazing)
Answers:
username_1: Sure, I've been meaning to get familiar with Travis CI anyway. Might not get to it until next week though.
username_2: :+1:
username_1: I've got a pre-release up now, let me know if it works for you. I haven't tested any of the builds except linux-amd64. Note that auto-discovery of CPU/IO-consuming processes has been removed, so if that's a feature you cared about this won't help you. On the plus side it can now include child process resource consumption, and the numbers are now more accurate.
username_3: Hi @username_1 and thanks for the work. This exporter is very helpful!
The latest code allow configuration from a yaml file (-config.path) but it is not released yet, it would be great to have it.
username_1: Done.
Status: Issue closed
|
Sitecore/Sitecore-Instance-Manager | 507285406 | Title: It is possible to close the Solr Configuration Editor with the validation errors
Question:
username_0: Go to Settings->Solrs->'...'.
Put something into the Name column and leave others empty.
Click Ok.
For a moment you see a red '!' sign of error, but the window is closed and the new Solr configuration is not added.
1. It must not be possible to close the window using Ok button if there are errors.
2. The user must be notified about the errors that prevent closure.
3. There should be a cancel button to close the window if the user does not want to fix errors.
Answers:
username_1: fixed in #280
username_0: @username_1 please check the following scenario:
https://www.screencast.com/t/BHemnvVNoV
username_1: fixed
Status: Issue closed
|
dlang/dub | 308389777 | Title: Generated manual pages have errors
Question:
username_0: Hi!
When making the Debian/Ubuntu package use the new manual pages provided by Dub directly, our QA system found out that the generated pages have errors.
That doesn't prevent us from shipping them, but it's an issue that should be addressed.
```
dub: manpage-has-errors-from-man usr/share/man/man1/dub-build.1.gz 13: warning: numeric expression expected (got `f')
dub: manpage-has-errors-from-man usr/share/man/man1/dub-convert.1.gz 15: warning: numeric expression expected (got `f')
dub: manpage-has-errors-from-man usr/share/man/man1/dub-describe.1.gz 39: warning: numeric expression expected (got `b')
```
Offending lines are for example `.IP -f, --force`, so I guess just fixing the generator to escape the hyphens and quote the tag as a single argument (e.g. `.IP "\-f, \-\-force"`, so that `--force` is not parsed as a numeric indent width) will do the trick.
rust-lang/rust | 57124891 | Title: rustdoc doesn't parse cfgspecs properly
Question:
username_0: It treats something like `feature=bar` as a single `MetaWord` instead of a `MetaNameValue`. This is due to the code on [lines 70-73 of rustdoc/test.rs](https://github.com/rust-lang/rust/blob/134e00be7751a9fdc820981962e4fd7ea97bfff6/src/librustdoc/test.rs#L70-L73):
```rust
cfg.extend(cfgs.into_iter().map(|cfg_| {
let cfg_ = token::intern_and_get_ident(&cfg_);
P(dummy_spanned(ast::MetaWord(cfg_)))
}));
```
I have a PR for this that I will open shortly.<issue_closed>
Status: Issue closed |
faircloth-lab/phyluce | 954182202 | Title: issue spades: Expected assembly files were not found in output.
Question:
username_0: Hi,
I used spades to assemble my data and it succeeded for most of my samples, but not for five. I'm getting a warning that says "Expected assembly files were not found in output". At the end I have empty contig files for those samples.
I do not understand why Spades is working for some samples in this group but not for all. I wonder if you have an idea of what is happening.
Best!
2021-07-20 11:30:40,319 - phyluce_assembly_assemblo_spades_tmp - INFO - ========= Starting phyluce_assembly_assemblo_spades_tmp =========
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Version: 1.7.1
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Commit: None
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --config: /ddnB/work/habromys/U1/spades/spades115.conf
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --cores: 16
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --dir: None
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --log_path: /work/habromys/U1/spades/spades115_spa
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --memory: 32
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --no_clean: False
2021-07-20 11:30:40,320 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --output: /ddnB/work/habromys/U1/spades/spades115_spa
2021-07-20 11:30:40,321 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --subfolder:
2021-07-20 11:30:40,321 - phyluce_assembly_assemblo_spades_tmp - INFO - Argument --verbosity: INFO
2021-07-20 11:30:40,321 - phyluce_assembly_assemblo_spades_tmp - INFO - Getting input filenames and creating output directories
2021-07-20 11:30:40,347 - phyluce_assembly_assemblo_spades_tmp - INFO - ------------- Processing MZFC13091_Peromyscus_beatae ------------
2021-07-20 11:30:40,348 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 11:30:40,353 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 11:30:40,354 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 11:36:35,418 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 11:36:35,420 - phyluce_assembly_assemblo_spades_tmp - CRITICAL - Expected assembly files were not found in output.
2021-07-20 11:36:35,421 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 11:36:35,421 - phyluce_assembly_assemblo_spades_tmp - INFO - ---------- Processing MZFC9736_Peromyscus_carolpattonae ---------
2021-07-20 11:36:35,422 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 11:36:35,426 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 11:36:35,426 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 11:44:08,398 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 11:44:08,401 - phyluce_assembly_assemblo_spades_tmp - CRITICAL - Expected assembly files were not found in output.
2021-07-20 11:44:08,401 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 11:44:08,402 - phyluce_assembly_assemblo_spades_tmp - INFO - -------- Processing MZFC10521_Peromyscus_gratus_gentilis --------
2021-07-20 11:44:08,403 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 11:44:08,407 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 11:44:08,407 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 11:50:52,040 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 11:50:52,043 - phyluce_assembly_assemblo_spades_tmp - CRITICAL - Expected assembly files were not found in output.
2021-07-20 11:50:52,043 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 11:50:52,044 - phyluce_assembly_assemblo_spades_tmp - INFO - ----------- Processing MZFC15046_Peromyscus_spicilegus ----------
2021-07-20 11:50:52,044 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 11:50:52,049 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 11:50:52,049 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 12:30:30,717 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 12:30:30,901 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 12:30:30,902 - phyluce_assembly_assemblo_spades_tmp - INFO - ------- Processing MZFC12350_Reithrodontomys_fulvescens_2 -------
2021-07-20 12:30:30,902 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 12:30:30,906 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 12:30:30,906 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 13:22:28,227 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 13:22:28,351 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 13:22:28,352 - phyluce_assembly_assemblo_spades_tmp - INFO - ------------ Processing MZFC7841_Scotinomys_teguina_3 -----------
2021-07-20 13:22:28,352 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 13:22:28,356 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 13:22:28,356 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 15:11:23,499 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 15:11:23,595 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 15:11:23,596 - phyluce_assembly_assemblo_spades_tmp - INFO - ------------ Processing MZFC11015_Habromys_schmidlyi ------------
2021-07-20 15:11:23,596 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 15:11:23,600 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 15:11:23,600 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 15:18:34,027 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 15:18:34,029 - phyluce_assembly_assemblo_spades_tmp - CRITICAL - Expected assembly files were not found in output.
2021-07-20 15:18:34,029 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 15:18:34,030 - phyluce_assembly_assemblo_spades_tmp - INFO - ----------- Processing MZFC11166_Peromyscus_mexicanus -----------
2021-07-20 15:18:34,031 - phyluce_assembly_assemblo_spades_tmp - INFO - Finding fastq/fasta files
2021-07-20 15:18:34,035 - phyluce_assembly_assemblo_spades_tmp - INFO - File type is fastq
2021-07-20 15:18:34,035 - phyluce_assembly_assemblo_spades_tmp - INFO - Running SPAdes for PE data
2021-07-20 15:27:35,672 - phyluce_assembly_assemblo_spades_tmp - INFO - Removing extraneous assembly files
2021-07-20 15:27:35,674 - phyluce_assembly_assemblo_spades_tmp - CRITICAL - Expected assembly files were not found in output.
2021-07-20 15:27:35,674 - phyluce_assembly_assemblo_spades_tmp - INFO - Symlinking assembled contigs into /ddnB/work/habromys/U1/spades/spades115_spa/contigs
2021-07-20 15:27:35,675 - phyluce_assembly_assemblo_spades_tmp - INFO - ========= Completed phyluce_assembly_assemblo_spades_tmp ========
Answers:
username_1: I am having the same issue. Have you figured this out already?
username_2: It is very likely that Spades is running out of RAM to assemble the reads you have for the samples that are not assembling. There are two potential ways to fix this issue:
1. Find a machine with sufficient RAM and set the `--memory` parameter to ~ 4 GB below the amount of RAM on this machine, e.g. `--memory 60`. Or even better, `--memory 250`. The value you use is the number, in GB, of RAM to allocate to Spades.
2. Alternatively, you can downsample the reads you have for the individuals that are failing to assemble, using an approach [like this](http://protocols.faircloth-lab.org/en/latest/protocols-computer/snippets/random-computer-snippets.html#subsample-reads-for-r1-and-r2-using-seqtk). I usually downsample to something between 3 million reads (in total; so 1.5 million in R1 and 1.5 million in R2) and 6 million reads.
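For example, a small Python wrapper around `seqtk` (file names and read counts are illustrative; the linked protocol runs the same tool directly from the shell):
```python
import subprocess

# Downsample R1 and R2 to 1.5 million reads each (3 million total).
# The same -s seed must be used for both files so read pairs stay matched.
for reads, out in [("sample_R1.fastq.gz", "sub_R1.fastq"),
                   ("sample_R2.fastq.gz", "sub_R2.fastq")]:
    with open(out, "w") as fh:
        subprocess.run(["seqtk", "sample", "-s100", reads, "1500000"],
                       stdout=fh, check=True)
```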
username_0: I had not clarified the allowed memory parameter. It worked after I did. Thank you.
Status: Issue closed
|
SEED-platform/seed | 136829734 | Title: Clean up handling of filterable db columns on BE
Question:
username_0: The way we specify whether db columns are able to be filtered on the BE is messy and prone to developer error. Specifically looking at `build_json_params`, `get_mappable_types` helper methods and `search_building_snapshots` view.
Figure out how all of these are used and create a convenient class to generate these.
Answers:
username_1: Columns have been cleaned up and this is no longer valid.
Status: Issue closed
|
milesgranger/fresh | 335198746 | Title: Model runner (Local)
Question:
username_0: A way to parallelize running of models / creating "heats" for an arbitrary number of models to begin competing.
Could have a 'local' as well as 'distributed' configuration; for now in this issue we'll focus on getting parallel training of models locally to work. |
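A minimal local sketch of the idea, with everything here hypothetical (the real project may end up with a completely different interface):
```python
from concurrent.futures import ProcessPoolExecutor

def train(model_id: int) -> float:
    """Hypothetical stand-in: fit one competing model, return its score."""
    return float(model_id)  # placeholder

if __name__ == "__main__":
    # One "heat": an arbitrary number of models competing in parallel.
    with ProcessPoolExecutor() as pool:
        scores = dict(zip(range(8), pool.map(train, range(8))))
    print(scores)
```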
hackoregon/civic-devops | 335196675 | Title: Was the hacko-data-archive bucket meant to be Public?
Question:
username_0: It may be that the S3 bucket `hacko-data-archive` is configured with Public permissions, allowing access to its resources, such as using boto (AWS SDK for Python) to access static files #195.
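For reference, a minimal sketch of what that boto access looks like (key name hypothetical; the unsigned variant is a quick way to test whether an object really is public, since it sends no credentials at all):
```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned request: succeeds only if the bucket/object is publicly readable.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
obj = s3.get_object(Bucket="hacko-data-archive", Key="some/raw/source.csv")
print(len(obj["Body"].read()))
```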
Or maybe the access from an ECS-based container inherits some minimal access via AWS to be able to read data from certain parts of the bucket.
I don't remember - was this intentional? Do we consider all archived data backups (original raw files from outside agencies and bureaus, as well as the derived data from those raw sources) to be public domain assets?
Answers:
username_0: Once [PR 51](https://github.com/hackoregon/housing-2018/pull/51) is merged into Housing-2018 project, we can turn off public access to this bucket. |
gitbucket/gitbucket | 188149825 | Title: Upgrade to Scala 2.12
Question:
username_0: The Scala 2.12 version of Scalatra 2.5.0-RC1 has been released, but Slick still only supports Scala 2.12.0-M5. We will start updating our code using blocking-slick after the Scala 2.12 version of Slick is released.
Answers:
username_1: slick for Scala 2.12 released
http://repo1.maven.org/maven2/com/typesafe/slick/slick_2.12/3.2.0-M2/
Status: Issue closed
|
eslint/eslint | 96870823 | Title: Shareable config package prefix are not checked correctly
Question:
username_0: As discovered in [#3136](https://github.com/eslint/eslint/pull/3136/files#r35300730) the package prefix `eslint-config-` of shareable configs is not checked to appear exactly at the beginning of the package name. So currently package names like `fooeslint-config-bar` are allowed.
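Illustrative only (ESLint itself is JavaScript): the bug class is an unanchored substring match where an anchored prefix test is needed, i.e. something equivalent to:
```python
def is_shareable_config(pkg: str) -> bool:
    # Anchored prefix test -- the check the config loader should be doing.
    return pkg.startswith("eslint-config-")

assert is_shareable_config("eslint-config-bar")
assert not is_shareable_config("fooeslint-config-bar")  # currently accepted
```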
Answers:
username_1: Yeah, let's get that fixed.
Status: Issue closed
username_3: Thanks, sorry I missed that! |
phpactor/phpactor | 602202954 | Title: static doesn't behave as static
Question:
username_0: In this project we have a `Model` class that has this method:
```php
/**
* @param $id
* @param $class
*
* @return static
*/
public static function fromId($id, $class = '')
```
And we have a `PaymentToken` that extends `Model`:
```php
$paymentToken = PaymentToken::fromId(data_get($response, 'data.id'));
$i->assertEquals(11, $paymentToken->getCreditCardExpirationMonth());
```
When I attempt to Go To Definition of `getCreditCardExpirationMonth()` I get this error:
```
Error from Phpactor: Class "A3\Models\Model" has no method named "getCreditCardExpirationMonth", has: "init", "record", "getNoun", "get", "log", "staticLog", "sqlError", "sqlErrorStatic", "__construct", "snapshotFieldValues", "getDirtyFields", "getInitialFieldValues", "isDirty", "isFieldDirty", "areFieldsDirty", "fromId", "fromCriteria", "addChange", "changeCount", "isValid", "sqlFrom", "validKeyValue", "rawSqlFrom", "findWithSql", "replaceRawFields", "findWithStatement", "refresh", "decodeAndHealJsonField", "getPropertyDateTime", "getPropertyDate", "getPropertyBoolean", "getPropertyArray", "getPropertyModel", "getPropertyEnum", "getMetadata", "setMetadata", "setMetadataWithKey", "removeMetadataKey", "lookupObjectProperties", "castProperties", "getId", "setId", "getAppId", "setAppId", "setFieldReferences", "updateWithProperties", "store", "logChanges", "storeWithoutAction", "create", "createWithStatement", "update", "updateWithStatement", "isValidId", "getModelId", "getModelFromObjectOrId", "getIdFromObjectOrId", "validModel", "userOwnsRecord", "validatePropertiesMatch", "delete", "removeRecursiveFields", "getFields", Word "GivingHistory\$paymentToken->getCreditCardExpirationMonth" could not be resolved to a class
``` |
opendatahub-io/odh-manifests | 787245595 | Title: Tensorflow notebook build points to s2i-minimal folder
Question:
username_0: https://github.com/opendatahub-io/odh-manifests/blob/a56ccf24885533ff96667ae223f933d3fd66c53c/jupyterhub/notebook-images/overlays/build/tensorflow-notebook-buildconfig.yaml#L14
Answers:
username_1: This should be fixed in PR #264
Status: Issue closed
|
bendelonlee/little_shop | 391181772 | Title: Epic: Navigation
Question:
username_0: ## Navigation
This series of stories will set up a navigation bar at the top of the screen and present links and information to users of your site.
There is no requirement that the nav bar be "locked" to the top of the screen.
### Completion of these stories will encompass the following ideas:
- the navigation is built into app/views/layouts/application.html.erb or loaded into that file as a partial
- you write a single set of tests that simply click on a link and expect that your current path is what you expect to see
- your nav tests don't need to check any content on the pages, just that current_path is what you expect
You will need to set up some basic routing and empty controller actions and empty action view files.
Status: Issue closed
Answers:
username_0: child of #77
Status: Issue closed
|
web3j/web3j-maven-plugin | 243233454 | Title: Inconsistent behavior in a OSX context
Question:
username_0: Hello,
I've cloned the 0.1.2 version and tried a mvn clean install.
Here is my mvn banner (-version):
**Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T21:39:06+02:00)
Maven home: /Users/oschmitt/java/tools/maven/apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre
Default locale: fr_FR, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.5", arch: "x86_64", family: "mac"**
One test does not pass, an assertion failed with a strange compiler output :
**/private/<KEY>solc/solc: /private/<KEY>/solc/solc: cannot execute binary file**
It's the SolidityCompilerTest and the compileContract method.
If if run the test alone with IntelliJ for instance, the test passes.
Regards.
Answers:
username_1: Hi,
On the unix-based build server (see https://travis-ci.org/web3j/web3j-maven-plugin) the tests are working. It looks like the permissions on the file are *wrong* (not `chmod +x`). Maybe maven, which downloads the file, does not have the right permissions.
Can you run maven with the "--debug" flag and provide me the output?
username_0: Hello,
i've forked your repo and put it on travis with an OS X env.
https://travis-ci.org/username_0/web3j-maven-plugin
The --debug flag is set in mvn command.
Here is my fork:
https://github.com/username_0/web3j-maven-plugin
I've added a system.out.println() to output compiler errors in the console.
Connor should contact you about what i've done myself with web3j, I did not see your plugin and I built one with a different approach.
Thanks.
username_0: I've removed the system.out from SolidityCompilerTest to avoid side effects.
The code is now the same as yours (unless I missed some), .travis.yml is tailored for OS X of course.
username_1: It looks like maven is executing
* the wrong version of solc (https://superuser.com/questions/724301/how-to-solve-bash-cannot-execute-binary-file)
* or the file `solc` has the wrong permission for the executing agent.
Can you try to remove `sudo: false` in your travis.yml file on your fork?
username_0: Yes, I just removed the sudo : false.
Now, it's sudo: true.
It does not change the status: https://travis-ci.org/username_0/web3j-maven-plugin/jobs/254416423
Have you noticed that the test case passed when I ran it directly through IntelliJ?
Maybe the test cases are not isolated, but then it would be a problem on Linux too, and it's not.
Anyway, one should not be forced to use sudo to run mvn install or a web3j app.
My guess is that it's a nasty native-related bug, not obvious to pinpoint.
Regards.
username_1: As you can see, I played around a lot, unfortunately without success. But I have some hints:
I think the root cause is the following message
`objc[1162]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/jre/bin/java (0x10fe0f4c0) and /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/jre/lib/libinstrument.dylib (0x1121bb4e0). One of the two will be used. Which one is undefined.
`
It looks like a bug in the current jdk
https://stackoverflow.com/questions/20794751/class-javalaunchhelper-is-implemented-in-both-one-of-the-two-will-be-used-whic
username_0: I've done some research and, as far as I understand, this bug is considered cosmetic.
https://bugs.openjdk.java.net/browse/JDK-8022291
First registered in 2013 in the bug tracking service of OpenJDK.
I had that message for years on my mac without any inconvenience.
I don't see the link between the permission message error and the JavaLaunchHelper warning message.
When I run the test case alone through IntelliJ, it passes and I get the same message:
/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/bin/java -ea -Didea.test.cyclic.buffer.size=1048576 "-javaagent:/Applications/IntelliJ IDEA 2017.2 EAP.app/Contents/lib/idea_rt.jar=49491:/Applications/IntelliJ IDEA 2017.2 EAP.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Applications/IntelliJ IDEA 2017.2 EAP.app/Contents/lib/idea_rt.jar:/Applications/IntelliJ IDEA 2017.2 EAP.app/Contents/plugins/junit/lib/junit-rt.jar:/Applications/IntelliJ IDEA 2017.2 EAP.app/Contents/plugins/junit/lib/junit5-rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/jfxswt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/packager.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/lib/tools.jar:/Users/oschmitt/Downloads/web3j-maven-plugin-web3j-maven-plugin-0.1.2/target/test-classes:/Users/oschmitt/Downloads/web3j-maven-plugin-web3j-maven-plugin-0.1.2/target/classes:/Users/oschmitt/.m2/repository/org/apache/maven/maven-plugin-api/3.5.0/maven-plugin-api-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-model/3.5.0/maven-model-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-artifact/3.5.0/maven-artifact-3.5.0.jar:/Users/oschmitt/.m2/repository/org/eclipse/sisu/org.eclipse.sisu.plexus/0.3.3/org.eclipse.sisu.plexus-0.3.3.jar:/Users/oschmitt/.m2/repository/javax/enterprise/cdi-api/1.0/cdi-api-1.
0.jar:/Users/oschmitt/.m2/repository/javax/annotation/jsr250-api/1.0/jsr250-api-1.0.jar:/Users/oschmitt/.m2/repository/org/eclipse/sisu/org.eclipse.sisu.inject/0.3.3/org.eclipse.sisu.inject-0.3.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-core/3.5.0/maven-core-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-settings/3.5.0/maven-settings-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-settings-builder/3.5.0/maven-settings-builder-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-builder-support/3.5.0/maven-builder-support-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-repository-metadata/3.5.0/maven-repository-metadata-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-model-builder/3.5.0/maven-model-builder-3.5.0.jar:/Users/oschmitt/.m2/repository/com/google/guava/guava/20.0/guava-20.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-resolver-provider/3.5.0/maven-resolver-provider-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/resolver/maven-resolver-impl/1.0.3/maven-resolver-impl-1.0.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/resolver/maven-resolver-api/1.0.3/maven-resolver-api-1.0.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/resolver/maven-resolver-spi/1.0.3/maven-resolver-spi-1.0.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/resolver/maven-resolver-util/1.0.3/maven-resolver-util-1.0.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/shared/maven-shared-utils/3.1.0/maven-shared-utils-3.1.0.jar:/Users/oschmitt/.m2/repository/com/google/inject/guice/4.0/guice-4.0-no_aop.jar:/Users/oschmitt/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/Users/oschmitt/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-interpolation/1.24/plexus-interpolation-1.24.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-utils/3.0.24/plexus-utils-3.0.24.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-classworlds/2.5.2/plexus-classworlds-2.5.2.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-component-annotations/1.7.1/plexus-component-annotations-1.7.1.jar:/Users/oschmitt/.m2/repository/org/sonatype/plexus/plexus-sec-dispatcher/1.4/plexus-sec-dispatcher-1.4.jar:/Users/oschmitt/.m2/repository/org/sonatype/plexus/plexus-cipher/1.4/plexus-cipher-1.4.jar:/Users/oschmitt/.m2/repository/org/apache/commons/commons-lang3/3.5/commons-lang3-3.5.jar:/Users/oschmitt/.m2/repository/org/apache/maven/plugin-tools/maven-plugin-annotations/3.5/maven-plugin-annotations-3.5.jar:/Users/oschmitt/.m2/repository/org/apache/maven/shared/file-management/3.0.0/file-management-3.0.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/shared/maven-shared-io/3.0.0/maven-shared-io-3.0.0.jar:/Users/oschmitt/.m2/repository/org/ethereum/solcJ-all/0.4.8/solcJ-all-0.4.8.jar:/Users/oschmitt/.m2/repository/org/web3j/core/2.2.1/core-2.2.1.jar:/Users/oschmitt/.m2/repository/org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar:/Users/oschmitt/.m2/repository/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar:/Users/oschmitt/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/Users/oschmitt/.m2/repository/commons-codec/commons-codec/1.9/commons-codec-1.9.jar:/Users/oschmitt/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar:/Users/oschmitt/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.8.0/jackson
-annotations-2.8.0.jar:/Users/oschmitt/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.8.5/jackson-core-2.8.5.jar:/Users/oschmitt/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.54/bcprov-jdk15on-1.54.jar:/Users/oschmitt/.m2/repository/com/lambdaworks/scrypt/1.4.0/scrypt-1.4.0.jar:/Users/oschmitt/.m2/repository/com/squareup/javapoet/1.7.0/javapoet-1.7.0.jar:/Users/oschmitt/.m2/repository/io/reactivex/rxjava/1.2.4/rxjava-1.2.4.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-unixsocket/0.15/jnr-unixsocket-0.15.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-ffi/2.1.2/jnr-ffi-2.1.2.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jffi/1.2.14/jffi-1.2.14.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jffi/1.2.14/jffi-1.2.14-native.jar:/Users/oschmitt/.m2/repository/org/ow2/asm/asm/5.0.3/asm-5.0.3.jar:/Users/oschmitt/.m2/repository/org/ow2/asm/asm-commons/5.0.3/asm-commons-5.0.3.jar:/Users/oschmitt/.m2/repository/org/ow2/asm/asm-analysis/5.0.3/asm-analysis-5.0.3.jar:/Users/oschmitt/.m2/repository/org/ow2/asm/asm-tree/5.0.3/asm-tree-5.0.3.jar:/Users/oschmitt/.m2/repository/org/ow2/asm/asm-util/5.0.3/asm-util-5.0.3.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-x86asm/1.0.2/jnr-x86asm-1.0.2.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-constants/0.9.6/jnr-constants-0.9.6.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-enxio/0.14/jnr-enxio-0.14.jar:/Users/oschmitt/.m2/repository/com/github/jnr/jnr-posix/3.0.33/jnr-posix-3.0.33.jar:/Users/oschmitt/.m2/repository/junit/junit/4.12/junit-4.12.jar:/Users/oschmitt/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/Users/oschmitt/.m2/repository/org/apache/maven/plugin-testing/maven-plugin-testing-harness/3.3.0/maven-plugin-testing-harness-3.3.0.jar:/Users/oschmitt/.m2/repository/commons-io/commons-io/2.2/commons-io-2.2.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-archiver/2.2/plexus-archiver-2.2.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-container-default/1.0-alpha-9-stable-1/plexus-container-default-1.0-alpha-9-stable-1.jar:/Users/oschmitt/.m2/repository/classworlds/classworlds/1.1-alpha-2/classworlds-1.1-alpha-2.jar:/Users/oschmitt/.m2/repository/org/codehaus/plexus/plexus-io/2.0.4/plexus-io-2.0.4.jar:/Users/oschmitt/.m2/repository/org/apache/maven/maven-compat/3.5.0/maven-compat-3.5.0.jar:/Users/oschmitt/.m2/repository/org/apache/maven/wagon/wagon-provider-api/2.12/wagon-provider-api-2.12.jar" com.intellij.rt.execution.junit.JUnitStarter -ideVersion5 -junit4 org.web3j.mavenplugin.solidity.SolidityCompilerTest
**objc[721]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/bin/java (0x1068334c0) and /Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/libinstrument.dylib (0x1068fb4e0). One of the two will be used. Which one is undefined.**
Status: Issue closed
username_1: After a lot of commit noise and help from @kabl:
The issue was a unit test which had not reverted the system property "os.image".
See the new version of `SolCTest.java`
Sorry for that! |
fprimex/zdesk | 53908165 | Title: patch.hc_comment_id_id was generated in the wrong direction
Question:
username_0: While the code generation appeared to have completed correctly, the diff that stored the documentation change was generated in the wrong direction. The purpose of this patch was to fix the repeated {id} in the URL. The line that should be present after patching is this one:
DELETE /api/v2/help_center/articles/{article_id}/comments/{id}.json
instead of this one:
DELETE /api/v2/help_center/articles/{id}/comments/{id}.json
Status: Issue closed
Answers:
username_0: Fixed with 2.1.0. |
bazelbuild/bazel | 1067553955 | Title: Bazel server artifacts become corrupted after 1 year
Question:
username_0: ### Description of the problem / feature request:
Bazel launch process includes the following logic:
- If the install dir doesn't exist, extract jars from the Bazel binary, put them in the install dir, then bless all files (set mtime to 10 years in the future).
- If the install dir does exist, check that the mtime of every file is more than 9 years in the future. If not, report `corrupt installation`.
This logic means that if the Bazel server artifacts were extracted more than a year ago, the `bazel` command will report `corrupt installation`. This behavior is not configurable and not clearly documented. A rough sketch of the check follows.
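As an illustration only (this is not Bazel's actual C++ code; the constants simply mirror the description above):
```python
import os
import time

TEN_YEARS = 10 * 365 * 24 * 3600   # "bless" horizon used at extraction time
NINE_YEARS = 9 * 365 * 24 * 3600   # staleness threshold checked at launch

def bless(path):
    # At extraction time, push the file's mtime ~10 years into the future.
    future = time.time() + TEN_YEARS
    os.utime(path, (future, future))

def looks_corrupt(path):
    # At launch time, anything blessed less than ~9 years out fails the check.
    return os.path.getmtime(path) < time.time() + NINE_YEARS
```
Since blessing happens only once, the two thresholds drift apart over time, and an installation extracted more than a year ago trips the check.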
Improvements could include adding an explanation of the "1 year corruption" to [this section](https://github.com/bazelbuild/bazel/blob/bdd5d49386e1e7a2e50f7e27cc8439ce4ee997a0/site/docs/guide.md?plain=1#L697), changing [distant_future_](https://github.com/bazelbuild/bazel/blob/cbf3ff91f77d86ba65a07afd74d95e2b88590c8b/src/main/cpp/util/file_posix.cc#L441) to a larger number, or adding a Bazel client option to force re-extraction.
### What underlying problem are you trying to solve with this feature?
It's confusing when a working system suddenly stops working without any changes. For build servers this might be a common problem if Bazel hasn't been upgraded within a year.
### What operating system are you running Bazel on?
Ubuntu 16.04 Xenial
### What's the output of `bazel info release`?
`release 3.7.0`
### Have you found anything relevant by searching the web?
- Didn't find anything related to this in doc. The understanding is based on some basic code readings.
- https://github.com/bazelbuild/bazel/issues/471
- https://github.com/bazelbuild/bazel/issues/5301
Answers:
username_1: any traction on this error? how do I pull new artifacts? Where are the server artifacts stored so I can remove them and start fresh? |
jpadilla/pyjwt | 786150937 | Title: jwt.jws_api.PyJWS.decode_complete shouldn't accept kwargs argument
Question:
username_0: Here we are using flask_jwt_extended. There is a call in it that calls jwt.api_jwt.PyJWT.decode with a now-disappeared parameter. The problem is that jwt.api_jwt.PyJWT.decode accepts any named parameters and forwards them to jwt.jws_api.PyJWS.decode_complete. So a call that should fail is working, but doing the wrong thing.
Moreover, kwargs in jwt.jws_api.PyJWS.decode_complete isn't used at all. It looks suspicious.
## Expected Result
If an API caller calls a function with a bad parameter, an exception is raised immediately. `**kwargs` shouldn't be used as a garbage parameter collection.
## Actual Result
An exception is raised a few lines later, masking the real problem of the wrong usage of the API
## Reproduction Steps
```python
import jwt
# does not raise even if prout has never been a valid parameter.
unverified_claims = jwt.decode(
encoded_token, verify=False, prout="bar",
)
```<issue_closed>
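For comparison, here is a minimal sketch of the stricter behaviour the report asks for: rejecting unknown keyword arguments up front rather than collecting them silently (illustrative only; not pyjwt's actual implementation):
```python
def decode_complete(jwt, key="", algorithms=None, options=None, **kwargs):
    # Fail fast on parameters this function does not understand,
    # instead of accepting them in **kwargs and ignoring them.
    if kwargs:
        raise TypeError(
            "decode_complete() got unexpected keyword arguments: "
            + ", ".join(sorted(kwargs))
        )
    ...  # normal decoding continues here
```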
Status: Issue closed |
jlippold/tweakCompatible | 588491491 | Title: `NewTerm (iOS 10 – 13)` working on iOS 13.3
Question:
username_0: ```
{
"packageId": "ws.hbang.newterm2",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "ws.hbang.newterm2",
"deviceId": "iPhone11,2",
"url": "http://cydia.saurik.com/package/ws.hbang.newterm2/",
"iOSVersion": "13.3",
"packageVersionIndexed": true,
"packageName": "NewTerm (iOS 10 â 13)",
"category": "Terminal Support",
"repository": "Chariz",
"name": "NewTerm (iOS 10 â 13)",
"installed": "2.4",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 13 working reports.",
"id": "ws.hbang.newterm2",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "A powerful terminal app for iOS",
"latest": "2.4",
"author": "<NAME>",
"packageStatus": "Working"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
syscoin/blockmarket-desktop-public | 228209847 | Title: Safe Search is ambiguous
Question:
username_0: Not clear if it means:
1) This item is safe for all to search, or
2) This item should be hidden by the Safe Search filter
It is named differently on New Cert tab and New offer Tab:


Answers:
username_1: retest in beta2, note that global ss filter not in yet, test for other aspects of this issue
Status: Issue closed
username_2: wording change, closing #55 |
realm/realm-studio | 431369437 | Title: Users wants to choose the sync worker when creating a Realm on a server
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A user might want to place a new Realm on a specific sync worker.
**Describe the solution you'd like**
When creating a new Realm on a server, it would be nice to be able to optionally enter the label of the sync worker on which the Realm should be placed. Bonus points if the sync label auto-completes to values already in the `/__admin` Realm.
**Describe alternatives you've considered**
Manually editing the `/__admin` Realm before creating the Realm. |
NASA-AMMOS/AIT-Core | 1129217479 | Title: Consider adding a switch that disables the standard Config warnings
Question:
username_0: Ideally, for a project, the config.yaml is finely tuned and warnings should not be emitted.
For dev, however, we are usually stuck with the same default config.yaml, which means we see these warnings every time we run AIT:
2022-02-09T15:19:39.982 | WARNING | AIT_ROOT not set. Defaulting to "/home/username_0/dev/ait/ait_encrypt_111621/AIT-DSN"
2022-02-09T15:19:40.061 | WARNING | Config parameter command.history.filename specifies nonexistent path /home/username_0/dev/cmdhist.pcap
2022-02-09T15:19:40.062 | WARNING | Config parameter sequence.directory specifies nonexistent path /home/username_0/dev/seq
2022-02-09T15:19:40.063 | WARNING | Config parameter script.directory specifies nonexistent path /home/username_0/dev/script
2022-02-09T15:19:40.063 | WARNING | Config parameter cmddict.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/cmd.yaml
2022-02-09T15:19:40.064 | WARNING | Config parameter evrdict.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/evr.yaml
2022-02-09T15:19:40.064 | WARNING | Config parameter tlmdict.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/tlm.yaml
2022-02-09T15:19:40.065 | WARNING | Config parameter limits.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/limits/limits.yaml
2022-02-09T15:19:40.065 | WARNING | Config parameter table.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/table.yaml
2022-02-09T15:19:40.066 | WARNING | Config parameter bsc.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/bsc.yaml
2022-02-09T15:19:40.066 | WARNING | Config parameter dsn.cfdp.mib.path specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/mib
2022-02-09T15:19:40.067 | WARNING | Config parameter dsn.cfdp.datasink.outgoing.path specifies nonexistent path /home/username_0/dev/ait/ait/dsn/cfdp/datasink/outgoing
2022-02-09T15:19:40.067 | WARNING | Config parameter dsn.cfdp.datasink.incoming.path specifies nonexistent path /home/username_0/dev/ait/ait/dsn/cfdp/datasink/incoming
2022-02-09T15:19:40.068 | WARNING | Config parameter dsn.cfdp.datasink.tempfiles.path specifies nonexistent path /home/username_0/dev/ait/ait/dsn/cfdp/datasink/tempfiles
2022-02-09T15:19:40.068 | WARNING | Config parameter dsn.cfdp.datasink.pdusink.path specifies nonexistent path /home/username_0/dev/ait/ait/dsn/cfdp/datasink/pdusink
2022-02-09T15:19:40.069 | WARNING | Config parameter leapseconds.filename specifies nonexistent path /home/username_0/dev/ait/kmc_cfgs/leapseconds.dat
2022-02-09T15:19:40.069 | WARNING | Config parameter data.1553.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/downlink/1553
2022-02-09T15:19:40.070 | WARNING | Config parameter data.bad.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/downlink/bad
2022-02-09T15:19:40.070 | WARNING | Config parameter data.lehx.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/downlink/lehx
2022-02-09T15:19:40.071 | WARNING | Config parameter data.planning.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/planning
2022-02-09T15:19:40.071 | WARNING | Config parameter data.sdos.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/sdos
2022-02-09T15:19:40.072 | WARNING | Config parameter data.uplink.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/uplink
2022-02-09T15:19:40.072 | WARNING | Config parameter data.ats.path specifies nonexistent path /gds/dev/data/mipldevlinux14/2022/2022-040/ats
It would be nice to be able to silence these - especially when running quick demos!
Putting it in the config itself seems dubious, but maybe an ENVAR? Or are those frowned upon? |
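For illustration, a sketch of what an environment-variable switch could look like (the variable name `AIT_SUPPRESS_CONFIG_WARNINGS` and the helper are invented for this sketch; AIT does not currently have this):
```python
import os
import logging

log = logging.getLogger("ait")

def warn_nonexistent_path(param, path):
    # Honor a hypothetical AIT_SUPPRESS_CONFIG_WARNINGS=1 switch so that
    # dev setups can run quietly with the default config.yaml.
    if os.environ.get("AIT_SUPPRESS_CONFIG_WARNINGS") == "1":
        return
    log.warning("Config parameter %s specifies nonexistent path %s", param, path)
```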
Gibberlings3/iesdp | 345527526 | Title: itm file
Question:
username_0: Itm file format: "minimum level" at offset 0x0024 - does it not work? (BG2:ToB)
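For anyone who wants to test against the engine, a quick way to read the field in question from an `.itm` file (a sketch: it assumes a 16-bit little-endian word at offset 0x0024, per the usual IE file conventions, and the file name is only an example):
```python
import struct

def minimum_level(itm_path):
    with open(itm_path, "rb") as f:
        f.seek(0x0024)
        # One unsigned 16-bit little-endian value at the documented offset.
        return struct.unpack("<H", f.read(2))[0]

print(minimum_level("SW1H01.ITM"))
```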
Answers:
username_1: I don't see anyone discussing it on the forum, but considering I've also never seen any items relying on it, it seems everyone agreed it was a silly concept.
Status: Issue closed
username_1: Do you have the code? No. Can you read (dis)assembly? Then check the code, it's the only way to be sure. Simple testing can probably give a good enough answer too, depending on what you are after.
username_1: Sounds like a broken block, something only binary patching could fix. I doubt any of the existing ones did, since there are no users.
If you're familiar with html, please open up a pull request with a fix for the text. Something like appending "(crashes in bg2 once requirement met)".
username_0: Feel free to make the changes to the description yourself.
jedib0t/go-pretty | 907678962 | Title: increase row distance (padding)
Question:
username_0: **Describe the bug**
Apologies if this is documented and I could not find it: I would like to increase distance between consecutive rows, namely allow for bigger padding between one row and the other.
**To Reproduce**
Looking at [these properties](https://github.com/username_1/go-pretty/blob/11849e40fe2d5c5e0c68bf6cbbb803a93e524321/table/table.go#L27), it seems only `allowedRowLength` is defined; likewise, I could not find an equivalent of `t.SetColumnConfig` for rows.
**Expected behavior**
Does such option exist at all and what would you recommend in alternative?
Answers:
username_1: At the moment, there is no way to increase the "padding"/distance between consecutive rows. You may want to just introduce newlines in your text to introduce an artificial empty line.
username_0: I see, thank you for looking into it!
Status: Issue closed
|
microsoft/winget-cli | 644143182 | Title: Source management should require administrator privileges
Question:
username_0: # Description of the new feature/enhancement
To prevent changes to sources from swapping out the target package to install, the source management functionality should require administrator privileges. This will prevent an unelevated attacker from tricking the user into elevating an installer.<issue_closed>
Status: Issue closed |
sarl/sarl | 805458758 | Title: Hazelcast 4.x support
Question:
username_0: Hello, I've been encountering an issue when trying to move my solution to Hazelcast 4.x. Last time I used SARL it was with Hazelcast 3.12, as I needed to communicate with a Python client that did not yet support version 4.
**Is your feature request related to a problem? Please describe.**
After a few hours trying to figure out where the problem actually was, I found out it happened only when starting my application from a SARL context. When running this sample code in Java, everything ran smoothly:
```
public class TopicSample implements MessageListener<String> {
@Override
public void onMessage(Message<String> message) {
System.out.println("Got message " + message.getMessageObject());
}
public static void main(String[] args) {
Config config = new Config();
// Start the Embedded Hazelcast Cluster Member.
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
// Get a Topic called "my-distributed-topic"
ITopic<String> topic = hz.getTopic("my-distributed-topic");
// Add a Listener to the Topic
topic.addMessageListener(new TopicSample());
// Publish a message to the Topic
topic.publish("Hello to distributed world");
// Shutdown the Hazelcast Cluster Member
hz.shutdown();
}
}
```
However when I ran this equivalent agent, I got a problem when I try to get the hazelcast instance:
```
agent SimpleAgent {
on Initialize {
var hcConfig = new Config
var hcInstance = Hazelcast.newHazelcastInstance()
var topic = hcInstance.getTopic("test")
topic.addMessageListener(new MessageListener<String>(){
def onMessage(message : Message<String>) {
println("Got message " + message.getMessageObject)
}
})
topic.publish("Hello to distributed world")
hcInstance.shutdown
}
}
```
I get the following Stacktrace:
```
[SEVERE, 12:40:13pm, com.hazelcast.instance.impl.Node] [172.16.31.10]:5702 [dev] [4.1.1] Node creation failed
java.lang.NoSuchMethodError: com.hazelcast.internal.serialization.impl.ArrayDataSerializableFactory.<init>([Lcom/hazelcast/util/ConstructorFunction;)V
at com.hazelcast.mapreduce.aggregation.impl.AggregationsDataSerializerHook.createFactory(AggregationsDataSerializerHook.java:432)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.<init>(DataSerializableSerializer.java:68)
at com.hazelcast.internal.serialization.impl.SerializationServiceV1.<init>(SerializationServiceV1.java:145)
at com.hazelcast.internal.serialization.impl.SerializationServiceV1$Builder.build(SerializationServiceV1.java:405)
at com.hazelcast.internal.serialization.impl.DefaultSerializationServiceBuilder.createSerializationService(DefaultSerializationServiceBuilder.java:300)
at com.hazelcast.internal.serialization.impl.DefaultSerializationServiceBuilder.build(DefaultSerializationServiceBuilder.java:236)
at com.hazelcast.internal.serialization.impl.DefaultSerializationServiceBuilder.build(DefaultSerializationServiceBuilder.java:55)
[Truncated]
at be.uclouvain.aptitude.test.SimpleAgent.$behaviorUnit$Initialize$0(SimpleAgent.java:22)
at be.uclouvain.aptitude.test.SimpleAgent.lambda$0(SimpleAgent.java:31)
at io.janusproject.kernel.bic.internaleventdispatching.AgentInternalEventsDispatcher$1.run(AgentInternalEventsDispatcher.java:295)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
```
**Describe the solution you'd like**
I may be wring but I assume that the problem is that somehow that Hazelcast 4 is not yet supported. I first thought the problem came from Hazelcast side, but as it is very simple code and it was released several months ago, this now seems very unlikely and I found no such errors on the internet.
So that would be nice to support Hazelcast 4 in a next version. If you have any other clew, I'm all ears of course!
**Describe alternatives you've considered**
I tried instantiating Hazelcast from a java class instead and get the instance from a SARL class, but I got the same exception so I guess it is related to SARL context.
So at the moment I moved back my code to target Hazelcast 3.12.12.
**Additional context**
I'm using SARL 10.1 with JDK 11.0.7 on Windows 10 64 bits
Answers:
username_1: It seems that different versions of Hazelcast are used in the two cases.
Are you using Maven for building your project? In this case, you could force the version of Hazelcast in your `pom.xml` file.
If you are not using Maven (but a simple SARL project), then the SARL run-time libraries are added to the classpath on the fly when you launch your application (the run-time libraries replace the SARL libraries in the project definition). The run-time libraries include a hard-coded version of Hazelcast. You may have to change the order of the libraries in your project so that the Jar files you added by hand (the latest version of Hazelcast) come before the SARL libraries.
I assume that you launch your application with "SARL Agent" or "SARL Application" launch configurations.
username_0: I'm actually using Maven. In both cases I used the same project where I had the version 4.1.1 in my `pom.xml`.
For the sample Java code I used Run as > Java Application whereas for the SARL I launched a Java Application using a run configuration (with io.sarl.bootstrap.SRE as the main class). This seems to be the only difference, except if the generated java code is actually the problem.
username_1: There is no difference between Java and SARL regarding access to the class files on the classpath, since SARL code is transformed to Java code before being compiled to binary code. And because you are using "Run As Java Application" for both cases, the classpath should be the same for both applications (assuming the two code samples are in the same project; otherwise, you have to double-check that the two projects have exactly the same definition).
I saw a difference between your two code samples: in the SARL code you create a Hazelcast instance without any arguments.
Shouldn't it be the following?
```sarl
var hcInstance = Hazelcast.newHazelcastInstance(hcConfig)
```
username_1: The problem may be due to the fact that SARL `0.10.1` embeds Hazelcast in order to set up the network between the Janus nodes. It seems that there is some conflict between the two versions of Hazelcast.
If you plan to start a Hazelcast instance from within your agents, I think it is better to prevent the SARL run-time environment from starting its own Hazelcast instance. Indeed, only one instance of Hazelcast is created per Java virtual machine instance.
So, I recommend disabling the networking feature of the SARL run-time environment by passing `--offline` as an application command-line argument (in the Eclipse launch configuration).
Please let us know if there is a difference in behavior.
username_0: 
So I added this to the program argument. Unfortunately, I obtain the same behavior.
username_1: The `--offline` argument does not work when you launch an application using [SRE.main()](https://github.com/sarl/sarl/blob/814788b81629cff3fb453f7433cb1eef4307c596/main/coreplugins/io.sarl.lang.core/src/io/sarl/bootstrap/SRE.java#L153).
Indeed, all the arguments are given to the launched agent, not to the SARL run-time environment.
If I understood your previous messages correctly, you're using `SRE.main()`.
In order to make a quick test, could you use the following main function:
```sarl
class MyMain {
static def main(args : String[]) : void {
val type = Class::forName(args.get(0)) as Class<? extends Agent>
val bs = SRE::bootstrap
bs.offline = true
bs.startAgent(type)
}
}
```
username_0: Indeed, I was using `SRE.main()`. I changed the main class for this one and give my SimpleAgent as argument but I still get the same stacktrace :(
username_1: Ok; I will downgrade my version of SARL and try to reproduce the issue on my side.
username_0: Thank you for your help and time!
For what it's worth, I could also upgrade mine if it works well with the current version, but I had some issues a few months ago with 0.11 and my JDK; that has probably been solved since.
username_1: Version 0.12 will be released in May 2021. Hazelcast is no longer included by default. It is inside a plugin that can easily be plugged in/out of your project.
I think it is preferable for you to skip 0.11 if you have encountered issues with it.
Status: Issue closed
|
acdh-oeaw/apis-core | 495094630 | Title: URIs – Delete and Merge behaviour
Question:
username_0: The following features are necessary:
+ Ability to delete URIs
+ When merging two entries the URIs are not added to the URI-field. Thus it does not recognize already existing entries based on the URIs.
Status: Issue closed
Answers:
username_1: When merging, URIs are now moved to the URIs field.
There is the possibility to search for URIs and delete them: https://pmb.acdh.oeaw.ac.at/apis/metainfo/apis/metainfo/uri/
We might want to move that to the entities forms, but for the moment that should be sufficient. Closing |
Canadensys/vascan-data | 67777485 | Title: authorities of species
Question:
username_0: [Originally posted on GoogleCode (id 1303) on 2012-04-09]
<b>(This is the template to report a data issue for Vascan. If you want to</b>
<b>report another issue, please change the template above.)</b>
<b>What is the URL of the page where the problem occurs?</b>
http://data.canadensys.net/vascan/taxon/9804
<b>What data are incorrect or missing?</b>
Potamogeton ×methyensis Bennett
<b>What data are you expecting instead?</b>
Potamogeton ×methyensis (A.Bennett) A.Bennett
basionym: Potamogeton angustifolius Berchtold & J.Presl var. methyensis A.Bennett
<b>If applicable, please provide an authoritative source.</b>
Tropicos, IPNI
Answers:
username_1: [Originally posted on GoogleCode on 2012-04-09 19:15Z]
basio added
Status: Issue closed
|
swat-hopper/hopper-frontend | 691583445 | Title: Implement ChalengeCard teacher variants
Question:
username_0: [Design](https://www.figma.com/file/5NeVIdpu0VfQpyzYHZ6cwq/Hopper_Prototype?node-id=77%3A2)
[Markup 1](https://github.com/swat-hopper/markups/tree/master/zwat/8.%20modal%20profesor)
[Markup 2](https://github.com/swat-hopper/markups/tree/master/zwat/9.%20modal%20profesor%20por%20calificar) |
homepaycompany/homepay_rails_monolyth | 306529440 | Title: Description of the pool of investors
Question:
username_0: **Summary:** Description of the pool of investors on the How it works page
**What is the expected correct behavior?** Property vendors should understand the kind of property investors which will purchase their properties.
Answers:
username_1: @username_0 I don't see the changes, where did you put them?
username_0: It's the FAQ question on the landing page.
Status: Issue closed
username_1: @username_0 shouldn't we also put this in "How it works"?
username_1: @username_0 what do you think?
Status: Issue closed
|
swagger-api/swagger-codegen | 303699213 | Title: Python swagger-codegen hard-coded base URL
Question:
username_0: ##### Description
Swagger API client for python should allow for the base URL to be passed via arguments at runtime
##### Swagger-codegen version
2.3.1
##### Swagger declaration file content or url
Any, can use petstore.
##### Command line used for generation
swagger-codegen generate -i swagger.yaml -l python -o client-lib
##### Steps to reproduce
Cannot pass URL to client library
##### Related issues/PRs
##### Suggest a fix/enhancement
Instead of hard-coding the base URL like [here](https://github.com/swagger-api/swagger-codegen/blob/master/samples/client/petstore/python/petstore_api/configuration.py#L50), pass it as a parameter to `__init__` in Configuration (or anywhere else that makes sense); a sketch follows.
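Concretely, something along these lines in the generated `Configuration` class (a sketch; the default value and parameter name are illustrative, not the generator's actual output):
```python
class Configuration(object):
    def __init__(self, host=None):
        # Keep the generated default, but let callers override
        # the base URL at construction time.
        self.host = host or "http://petstore.swagger.io/v2"

# usage
config = Configuration(host="https://api.example.com/v1")
```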
Answers:
username_1: I think you can change the base path in the python client during runtime.
username_2: Yes it can be configured at runtime
```python
api_client = ApiClient(configuration=configuration)
api_client.configuration.host = "http://whatever"
```
username_3: I do not like this particular approach. The reason? As an API author, I have an API where I encode a version/etc into the basepath so that the server can handle multiple versions concurrently/explicitly *not* handle unsupported versions.
If I bump the version of the API, I expect the version in the basepath to increase as well.
As a user of the client, I, for the most part, do not care about the details of the path to the API. I only care about where the API server is located. However, because the host includes both the base path *and* the host, I need to know the version/etc in order to construct a new host.
For example, my client has the following line in Configuration()'s constructor:
`self.host = "https://localhost/my_api/1.0.0"`
I tried changing host to something else with the expectation that the client would just work.
```
config = client_lib.Configuration()
config.host = "https://my-new-host"
```
However, I get 404s when I try to use the API. Looking at the server logs, I see accesses to stuff like `/resource` where I would expect `/my_api/1.0.0/resource`.
Right now the url is constructed as follows:
```
url = self.configuration.host + resource_path
```
I think that the host should actually be the host, and the base path something separate. I.e.
```
url = self.configuration.host + "/" + self.configuration.base_path + resource_path
```
The base_path would default to `/my_api/1.0.0` in my case.
Does that seem reasonable?
username_4: https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/
The problem is that codegen is made to generate code; it is not responsible for the environment. However, I strongly agree with updating the python code so it takes a parameter to set the host url.
As an alternative, a hosts entry can achieve what this ticket is asking for.
username_5: Has any change to this behavior occurred? If not, I agree with needing to ask for hostname and port as a required argument when initializing the API client.
I've recently begun modeling APIs with OAS3 for products explicitly and exclusively deployed on-premise. This means there is no SaaS to point to for a baseUrl, and hardcoding one is not useful as it will _**always**_ result in an error. Therefore, the API client should require a host/port (socket) to be passed to it at runtime.
I see where this can be modified in the Configuration class (for example, in the generated python library); however, I would much prefer to do it programmatically in the generator. |
dib-lab/charcoal | 623367726 | Title: does charcoal need to be run in the charcoal directory?
Question:
username_0: I installed charcoal according to the readme. When I run it in the cloned github directory, it works! When I run it in a different directory, it starts running and then on `rule contigs_clean_just_taxonomy` I get the error message:
```
Activating conda environment: /home/tereiter/github/2020-charcoal-paper/.snakemake/conda/f2fa0ad7
/home/tereiter/github/2020-charcoal-paper/.snakemake/conda/f2fa0ad7/bin/python: Error while finding module specification for 'charcoal.just_taxonomy' (ModuleNotFoundError: No module named 'charcoal')
```
The environment file `.snakemake/conda/f2fa0ad7.yaml` looks like this:
(e.g. the same as https://github.com/dib-lab/charcoal/blob/master/charcoal/conf/env-sourmash.yml)
```
channels:
- conda-forge
- bioconda
- defaults
dependencies:
- python>=3.6
- sourmash>=3.3,<4
- pytest
- pip
- pip:
- dendropy==4.4.0
```
Answers:
username_1: see https://github.com/dib-lab/charcoal/issues/45 - tl;dr do `--no-use-conda` for now.
and https://github.com/dib-lab/charcoal/issues/57 is relevant.
username_1: (pls confirm that works & close issue, or ask more questions :)
username_0: ah ok! this worked in one env, and royally borked a different one.
The original error in the borked one was `no module named screed`, so I ran `conda install sourmash`, and then got another error about a bad python 3.8 interpreter. I assume that environment just got messed up, and that if I start afresh it would work in this one too. Thank you, closing!
Status: Issue closed
username_1: excellent! |
cohorte/cohorte-runtime | 276431688 | Title: Ui looper in python isolate
Question:
username_0: Today the looper parameter can only be passed and use for java isolate but not for python.
We need to enhance it to allow to pass the looper wanted in order to be able to use it in the isolate.
In order to be use define looper property in isolate.js (e.g. b2bpos-isolate.js). And the component that handle the Gui will require the looper component<issue_closed>
Status: Issue closed |
btcsuite/btcd | 586448994 | Title: btcctl addmultisigaddress behaves differently with or without optional parameter
Question:
username_0: Using btcctl on simnet and connecting to wallet using RPC.
`btcctl --simnet --wallet --rpcuser=xxx --rpcpass=xxx addmultisigaddress 2 '["<KEY>", "<KEY>"]'`
This works correctly and responds with an address.
However, this does not work.
`btcctl --simnet --wallet --rpcuser=user --rpcpass=pass addmultisigaddress 2 '["<KEY>", "<KEY>"]' "multi"`
Error: -4: imported addresses must belong to the imported account
Facing issue when using btcd rpcclient as the address field is no longer optional in the API. It continuously gives the same error.
Answers:
username_1: Since you're importing an address, you need to always use the imported account with the way btcwallet is set up. Since this issue was created however, we've revamped the way importing pubkeys works in btcwallet (it can take xpubs now, etc), you'll be able to import public keys or xpubs and use the non-default account label.
Status: Issue closed
|
ADD-SP/ngx_waf | 1077761643 | Title: Error installing Nginx via the BT Panel (宝塔)
Question:
username_0: * How to trigger the error.
When installing nginx with the BT Panel (宝塔)
* ngx_waf version/branch.
Current
* Output of the `nginx -V` command.
Not installed
* Debug log ([how to get the debug log](#如何获取调试日志)).
Not installed
* `shell` output when the error occurred.
./configure: error: the ngx_http_waf_module module requires the uthash library.
Please run:
cd /www/server/nginx/src/ngx_waf \
&& git clone -b v2.3.0 https://github.com/troydhanson/uthash.git lib/uthash \
&& cd /www/server/nginx/src
## Additional information
* Operating system, including name and version.
CentOS 7.9.2009
I tried installing it with `yum install uthash-devel`, but that had no effect.
Also, after checking: during the nginx installation, uthash *was* successfully cloned by the pre-install script into the /www/server/nginx/src/ngx_waf/uthash directory.
--------------------------------
In addition, when running the "Run commands in shell" part of the documentation, some errors occurred.
Based on the error messages and the official docs, I installed some additional dependencies, which solved them:
yum install libtool
yum install https://archives.fedoraproject.org/pub/archive/fedora/linux/updates/23/x86_64/b/bison-3.0.4-3.fc23.x86_64.rpm
yum install gcc-c++ flex bison yajl yajl-devel curl-devel curl GeoIP-devel doxygen zlib-devel pcre-devel
Reference: https://github.com/SpiderLabs/ModSecurity/wiki/Compilation-recipes-for-v3.x#centos-7-minimal
It would be good to add these dependencies to the documentation.
Answers:
username_1: Documentation error: `uthash` should be downloaded into the `ngx_waf/lib` directory.
username_1: The documentation has been updated; clear your browser cache before visiting it again. Thanks for the additions to the docs.
username_0: All the earlier checks pass now,
but there is still an error.
This one I have absolutely no idea how to solve.
Could you please take a look? Thanks.
cc -c -I/usr/local/include/luajit-2.0/ -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g EMP_CC_OPT -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /www/server/nginx/src/ngx_devel_kit/objs -I objs/addon/ndk -I /www/server/nginx/src/lua_nginx_module/src/api -I pcre-8.43 -I /www/server/nginx/src/openssl/.openssl/include -I /usr/include/libxml2 -I objs \
-o objs/src/core/ngx_log.o \
src/core/ngx_log.c
cc -c -I/usr/local/include/luajit-2.0/ -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g EMP_CC_OPT -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /www/server/nginx/src/ngx_devel_kit/objs -I objs/addon/ndk -I /www/server/nginx/src/lua_nginx_module/src/api -I pcre-8.43 -I /www/server/nginx/src/openssl/.openssl/include -I /usr/include/libxml2 -I objs \
-o objs/src/core/ngx_palloc.o \
src/core/ngx_palloc.c
cc: error: EMP_CC_OPT: No such file or directory
cc: error: EMP_CC_OPT: No such file or directory
cc: error: EMP_CC_OPT: No such file or directory
make[1]: *** [objs/src/core/nginx.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [objs/src/core/ngx_log.o] Error 1
make[1]: *** [objs/src/core/ngx_palloc.o] Error 1
make[1]: Leaving directory `/www/server/nginx/src'
make: *** [build] Error 2
make -f objs/Makefile install
make[1]: Entering directory `/www/server/nginx/src'
cc -c -I/usr/local/include/luajit-2.0/ -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g EMP_CC_OPT -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /www/server/nginx/src/ngx_devel_kit/objs -I objs/addon/ndk -I /www/server/nginx/src/lua_nginx_module/src/api -I pcre-8.43 -I /www/server/nginx/src/openssl/.openssl/include -I /usr/include/libxml2 -I objs \
-o objs/src/core/nginx.o \
src/core/nginx.c
cc: error: EMP_CC_OPT: No such file or directory
username_1: Remove `--with-cc-opt=$TEMP_CC_OPT` from the compile arguments.
username_0: Removing it doesn't work; the parameter is then missing:
checking for C99 features ... not found
./configure: error: the ngx_http_waf_module module requires the C99 features, make sure your C compiler supports and enables the C99 standard.
For gcc, you can enable the C99 standard by appending the parameter --with-cc-opt='-std=gnu99'.
username_1: Try `--with-cc-opt=\${TEMP_CC_OPT}`
username_0: Now it reports this:
./configure: error: can not detect int size
username_1: Did you set the environment variable `$TEMP_CC_OPT`?
username_0: Yes, I did.
username_0: One moment, I'm re-testing to confirm.
username_0: Confirmed.
When I use the documentation's `--with-cc-opt=$TEMP_CC_OPT`,
the check passes: checking for int size ... 4 bytes
username_1: What is the output of `echo $TEMP_CC_OPT`?
username_0: [root@localhost ~]# echo $TEMP_CC_OPT
-std=gnu99 -Wno-sign-compare
username_0: Right.
If I change `--with-cc-opt=\${TEMP_CC_OPT}` to `--with-cc-opt=$TEMP_CC_OPT`,
this appears again:
```
checking for int size ...
./configure: error: can not detect int size
```
username_1: Change the value of `$TEMP_CC_OPT` to "'-std=gnu99 -Wno-sign-compare'", then use `--with-cc-opt=\${TEMP_CC_OPT}`.
username_1: Yes: it was originally wrapped in single quotes; now double quotes wrap the outside, with a pair of single quotes at the start and end.
username_0: Understood 😊
username_0: No luck; now it gets stuck even earlier:
checking for C compiler ... found
+ using GNU C compiler
checking for --with-ld-opt="-ljemalloc" ... not found
./configure: error: the invalid value in --with-ld-opt="-ljemalloc"
username_0: At this point it had only just finished cloning uthash and checking the OS.
username_1: Here's a picture I dedicate to the BT Panel installer.

Try changing it to `--with-cc-opt=-std=gnu99`; if that still doesn't work, I suggest contacting the BT Panel team and asking them to fix their own code.
This came up during earlier testing as well: the BT Panel installer mishandles compile arguments.
username_0: It works now, thank you very much.
The BT Panel really does have a lot of miscellaneous problems; I've been bitten by some of its plugins several times before.
Nowadays I only dare use it as a convenient management tool, otherwise configuring many sites is quite a hassle.
Status: Issue closed
username_0: One more small issue: a directory change seems to be missing here.
username_0: In the BT Panel docs,
in the "Run commands in shell" section,
after "# Install ModSecurity v3"
there should be an extra directory change,
i.e.
```
./configure --prefix=/usr/local/libmaxminddb
make -j$(nproc)
make install
# Install ModSecurity v3
cd /usr/local/src  # add this line
git clone -b v3.0.5 https://github.com/SpiderLabs/ModSecurity.git
cd ModSecurity
chmod +x build.sh
./build.sh
git submodule init
```
username_1: Documentation updated, thanks.
counsyl/stor | 290169948 | Title: Make swift / s3 requirements optional
Question:
username_0: Turns out PBR *does* support extras-require, thus installation becomes:
```
pip install stor[s3,swift]
```
Would be nice for making a much smaller install for people that are only using s3.
Requires:
- [ ] move requirements to extras
- [ ] change imports to be optional (see the sketch after this list)
- [ ] add tests for useful exceptions when appropriate imports aren't available
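A minimal sketch of the optional-import pattern the checklist describes (assuming boto3 backs the s3 extra; the helper name and message are illustrative):
```python
try:
    import boto3
except ImportError:
    boto3 = None

def _require_s3():
    # Raise a useful exception when the s3 extra was not installed.
    if boto3 is None:
        raise ImportError(
            "S3 support requires optional dependencies; "
            "install them with: pip install stor[s3]"
        )
```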
Open questions:
Should `stor.Path('swift://AUTH_final_analysis_prod/A/C')` work when swift utilities aren't installed? (similar question about `s3://` etc) |
l5r-discord-league/discord-league | 552479232 | Title: Deck registration for Single Elimination bracket
Question:
username_0: As DL player, I want to be able to register my deck for an upcoming Single Elimination bracket.
* In the season overview (See #3) I want to see an indicator when I am qualified for a single elimination bracket of a season that is in the "end of group stage" state
* In the details view of that season I want to be able to register my decklist for the upcoming bracket.
* To register I must
  * Provide a link to a valid decklist (either from fiveringsdb.com or bushibuilder.com)
  * Copy and paste that decklist in a text field
  * Click "save"
After submitting a decklist, I want to be able to edit the link and copied decklist as long as the season is in the "end of group stage" state<issue_closed>
Status: Issue closed |
jmapper-framework/jmapper-core | 146249255 | Title: Mapping from class derived from Template to POJO
Question:
username_0: Thanks for the help!
[JMapperTester.jar.zip](https://github.com/jmapper-framework/jmapper-core/files/206114/JMapperTester.jar.zip)
Answers:
username_1: I will look as soon as possible; however, the `serialVersionUID` field is automatically skipped, so it isn't necessary to define it in the configuration.
username_1: It works when the inherited classes belong to the mapped class, but fails in the case of the target class. I'm going to fix it, thank you again.
username_1: Fixed! you will see this fix in the next release.
Status: Issue closed
|
projectdiscovery/httpx | 976350986 | Title: Error running httpx with '%' character in path
Question:
username_0: **Describe the bug**
Related to https://github.com/projectdiscovery/httpx/issues/331
**Environment details**
Tested on httpx v1.1.2
**Error details**
This works:-
```bash
echo example.com | httpx -unsafe -path '/%invalid' -silent
```
But not this:-
```sh
echo 'example.com/%invalid' | httpx -unsafe -verbose
[DBG] Failed 'https://example.com/%invalid': parse "https://example.com/%invalid": invalid URL escape "%in"
``` |
DIYgod/RSSHub | 959111779 | Title: Bilibili dynamics cannot be fetched
Question:
username_0: #### Route address (without parameters)
```routes
bilibili/followings/dynamic/
```
#### Full route address, including all required and optional parameters
```fullroutes
bilibili/followings/dynamic/412244715
```
#### Related documentation
- https://docs.rsshub.app/social-media.html#bilibili
#### What is expected
_Fetch Bilibili followings' dynamics_
#### What actually happened
_Error message: Cannot read property 'title' of undefined_
#### Deployment information
| Env | Value |
| ------------------ | ------------- |
| OS | heroku-20 |
| Node version | |
| if Docker, version | |
- Additional information (logs, errors, etc.)
-
<issue_closed>
Status: Issue closed |
chanzuckerberg/miniwdl | 665506975 | Title: cannot use input filenames containing "{{"
Question:
username_0: This is probably a Docker issue:
```
Traceback (most recent call last):
File "/home/username_0/.local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.35/services/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/username_0/src/czi/miniwdl/WDL/runtime/task.py", line 175, in run_local_task
_try_task(cfg, logger, container, command, terminating)
File "/home/username_0/src/czi/miniwdl/WDL/runtime/task.py", line 467, in _try_task
return container.run(logger, command)
File "/home/username_0/src/czi/miniwdl/WDL/runtime/task_container.py", line 186, in run
exit_status = self._run(logger, terminating, command)
File "/home/username_0/src/czi/miniwdl/WDL/runtime/task_container.py", line 468, in _run
svc = client.services.create(image_tag, **kwargs)
File "/home/username_0/.local/lib/python3.8/site-packages/docker/models/services.py", line 225, in create
service_id = self.client.api.create_service(**create_kwargs)
File "/home/username_0/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 34, in wrapper
return f(self, *args, **kwargs)
File "/home/username_0/.local/lib/python3.8/site-packages/docker/api/service.py", line 189, in create_service
return self._result(
File "/home/username_0/.local/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/home/username_0/.local/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/username_0/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error: Bad Request ("rpc error: code = InvalidArgument desc = expanding mounts failed: expanding mount source "/tmp/miniwdl_test_tests.test_7runner.MiscRegressionTests.test_weird_filenames_xzs__oyo/ThisIsAVeryLongFilename abc...xzy1234567890!@{{నేనుÆды.test.ext": template: expansion:1: unexpected bad character U+0C47 'ే' in command")
```
Answers:
username_0: we could escape it using `{{ '{{' }}` :man_facepalming:
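i.e., something along these lines before handing the path to the Docker API (a sketch that just applies the escape suggested above; Docker swarm expands mount sources through Go's template engine, so whether this exact form is accepted would need testing):
```python
def escape_swarm_template(path):
    # Wrap each literal "{{" in a template action intended to
    # evaluate back to "{{", leaving the path unchanged after expansion.
    return path.replace("{{", "{{ '{{' }}")
```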
Status: Issue closed
|
Atsuhiko/Web-App | 609859060 | Title: GPU, TPUとは
Question:
username_0: ## GPU Graphics Processing Unit 画像処理装置
- 画像描写(3Dグラフィックスなど)を行う際に必要となる、計算処理を行う半導体チップ(プロセッサ)
- パソコンの頭脳はCPUだが、3Dグラフィックス描写に関する計算処理はGPUに任せられる。画像描写専門の頭脳
- CPUに比べ「単純かつ膨大な量のデータ」を短時間で処理することが得意
- CPUの数倍~100倍以上の計算速度を実現しうる。CPUより多くのコアを搭載している
## CPU Central Processing Unit 中央演算処理装置
Answers:
username_0: ## TPU: Tensor Processing Unit, a processor dedicated to deep learning
- An ASIC (Application-Specific Integrated Circuit) developed specifically for deep learning
- Roughly 10x the performance per watt of a GPU
- Works with TensorFlow.
- TensorFlow itself is published free of charge as an open-source deep-learning framework
- AI for everyone
username_1: Thank you, as always, for putting this information together so promptly. |
dart-lang/language | 787689452 | Title: Some improvements picked up from Kotlin language
Question:
username_0: I'm currently migrating a music app from native Android (using Kotlin) to Flutter, and here are a few improvements that could be done:
- using the `switch` statement without `break`, as Kotlin does with the `when` expression. This is very useful because a forgotten `break` can lead to bugs that are very hard to find (it happened to me a few times and I struggled a lot):
```
switch(myVariable)
1 -> doOneLineCommand()
2 -> {
doMultipleLineCommand()
doSomethingElseAndSoOn()
}
3, 4, 5 -> doSomethingForThoseMultipleCases()
default -> defaultCommand()
```
- using the `for` statement as Kotlin does. Again, repeating the variable name in the classic `for` statement can lead to bugs, for example if you copy/paste some code and forget to rename the variable everywhere (this happened to me once):
```
for (a in 1..10) {...}
for (a in 1..10 step 2) {...}
for (a in start until end step 3) {...}
for (a in start downTo end step 4) {...}
// and so on...
```
- also, just like Kotlin, can we get rid once and for all of the `;` at the end of each line? That would be great!
Thanks.
Answers:
username_1: The switch statement is very similar to Rust's syntax, and I really like it.
Also, having a for loop that takes a range such as `1..10` is much cleaner than `for (var i = 0; i < 10; i++) {}`.
username_0: I was skeptical about the semicolons too when I first tried Kotlin, but then I realized how nice it was to not deal with semicolons anymore and how they are actually useless.
username_2: All these suggestions have been debated for years.
On semicolons, please refer to https://github.com/dart-lang/language/blob/master/resources/optional-semicolons-prototype.md
See also an epic debate over the issue: https://github.com/dart-lang/language/issues/69
username_3: #1509
username_3: Duplicate #27 |
dotnet/machinelearning | 396564514 | Title: BinaryLoader created from IMultiStreamSource not Stream
Question:
username_0: Whenever a binary loader is created, it can be created from either a path or a stream. It is inappropriate for the public API to intimate that it is possible for the binary loader to be creatable from a stream. Even though it can be, this is a historical accident, as the format (and its reader) predates `IMultiStreamSource`. Rather it, like all readers of this shape, should take `IMultiStreamSource`. That *internally* it operates over a single stream is an implementation detail that should not be visible externally.<issue_closed>
Status: Issue closed |
randombit/botan | 102256620 | Title: RSA test Segmentation fault: 11 on OS X
Question:
username_0: * 2015-07-15 Travis OS X (shared lib): https://travis-ci.org/username_1/botan/jobs/70995185
* 2015-08-21 Travis OS X (static lib): https://travis-ci.org/username_1/botan/jobs/76545032
Answers:
username_1: Without a backtrace it is hard to proceed, and I don't have any OS X machines. But if someone can repro and get some basic information I'll check it out.
username_1: This crash hasn't been seen in many months so I'm going with either a compiler problem (given it was OS X only) or some bug of ours that has subsequently been fixed.
Status: Issue closed
|
duncanjbain/bugtracker | 912792054 | Title: Issue can't be created with pre-selected date, need to select a date for the form to get the value
Question:
username_0: **Describe the bug**
Bug can't be created with pre-selected date, need to select a date for the form to get the value
**To Reproduce**
Steps to reproduce the behavior:
1. Go to bug creation page
2. Leave date as default (current date)
3. Create bug
4. Fails due to not having date selected
**Expected behavior**
Form is submitted with default date value<issue_closed>
Status: Issue closed |
greiman/SdFat | 765625287 | Title: Add support for the Nano 33 IoT
Question:
username_0: My code, which was working fine on version 1.1.4, doesn't work anymore with the newest version of the library (version 2.0.2). It does compile, but no file is created on the SD card. When I run an example sketch, it does not detect the SD card. But with the same setup, version 1.1.4 works. Please fix this.
Answers:
username_1: The Nano 33 IoT has a SAMD21 processor. I tested SAMD21 by running the QuickStart example on an MKRZERO board and it works.
I don't have a Nano IoT so I need more info.
Have you tried the QuickStart example?
Which example did you run?
Did you edit the example?
What was the output?
What kind of SD module are you using?
Which pins are connected to the module?
username_2: Actually, I am facing similar issues, also with an Arduino Nano 33 IoT.
But I suspect it has nothing to do with the chip itself.
Below are, I think, the important parts of the code.
Quickstart is working fine. So I am quite sure everything is connected correctly.
Delays are just to be sure it has nothing to do with timing stuff.
SD Card: Intenso Micro SDHC 8GB
SD Shield: AZDelivery 3 x Set SPI Reader Micro Speicher SD TF Karte Memory Card Shield
```
//in header
#define CHIP_SELECT 10
// SD_FAT_TYPE = 0 for SdFat/File as defined in SdFatConfig.h,
// 1 for FAT16/FAT32, 2 for exFAT, 3 for FAT16/FAT32 and exFAT.
#define SD_FAT_TYPE 1
#define DISABLE_CHIP_SELECT -1
#define SPI_SPEED SD_SCK_MHZ(4) //Had to reduce the speed to 4 because it would not work above
/* --------------- HeaderFunctions ---------------------------- */
bool initializeSdCard();
bool loadConfiguration(const char *filename, TableConfig &config);
bool saveConfiguration(const char *filename, const TableConfig &config);
void printFileToSerial(const char *filename);
//in cpp
bool initializeSdCard() {
// Initialize the SD.
if (!SD.begin(CHIP_SELECT, SPI_SPEED)) {
SD.initErrorPrint(&Serial);
Serial.println("Failed to initialize SD card");
return false;
} else {
Serial.println("SD card successfully initialized."); // moved before the return; it was unreachable
return true;
}
}
// Loads the configuration from a file
bool loadConfiguration(const char *filename, TableConfig &config) {
// Open file for reading
delay(1000);
File file = SD.open(filename);
if (!file) {
file.close();
Serial.println("Cant load the file");
return false;
}
// Parse the JSON object in the file
bool success = deserializeConfig(file, config);
// This may fail if the JSON is invalid
if (!success) {
file.close();
Serial.println(F("Failed to deserialize configuration"));
return false;
}
[Truncated]
```
This is I think the important part of the code ...
So the problem is, the following: The first time I run the code a file is created. I can see it when I plug the SD card into my computer. The json is save fine.
But the second the time the file is not recognized correctly.
So the output is:
```
Cant load the file
Using default config
/config.json
No file on SD-Card. No deletion necessary.
/config.json
Failed to open config file for write
Failed to read file
Restart to load the config
```
How to debug this further?
username_1: I now have an Nano 33 IoT. I have tested many of the examples and it works with my SD socket and SD cards.
I use a simple microSD socket with no level shifter since level shifters are not needed for 3.3V chips. Sometimes level shifters distort signals for 3.3V chips like SAMD.
I tend to use SanDisk or Samsung SDs since some other brands work fine for the 4-bit SDIO bus used in phones, PC/Mac and other devices but have problems with the SPI protocol. You could try another SD card to see if behavior changes.
To debug I suggest you check for an SPI/hardware error whenever a call fails like this:
```
if (SD.sdErrorCode()) {
  SD.errorPrint("message about error");
}
```
If there is no SPI/hardware error you can enable debug messages and find where the error occurs.
Edit SdFat/src/common/DebugMacros.h at about line 28 and set USE_DBG_MACROS non-zero.
username_2: Ok, I bought a Sandisk ... Same behavior.
First question: when should I use SD_FAT_TYPE 0 and when SD_FAT_TYPE 1?
My trial and error tells me ArduinoJson is only satisfied with the normal File, so type 0.
I read the documentation, but I am not really getting it. Can this have something to do with the issues?
I tried both, but I am not succeeding so far.
I also tried the debug mode, and that's the output:
```
DBG_FAIL: ExFatPartition.cpp.302
SD card successfully initialized.
DBG_FAIL: FatFileLFN.cpp.423
Couldn't open the file to read
Using default config
DBG_FAIL: FatFileLFN.cpp.423
Not necesssary to delete file
DBG_FAIL: FatFileLFN.cpp.423
Failed to open config file for write
DBG_FAIL: FatFileLFN.cpp.423
Couldn't open the file to read
Restart to load the config
```
So it looks like it immediately fails trying to open the volume.
I'm pretty sure you might have another clue for me now? At least I hope so.
I tried to debug it further in the library code, but didn't get that far.
Seems like it's doing something in exFAT? Which is strange, because my SD card is formatted as FAT32.
username_1: The failure with exFAT is a check for an exFAT volume. You are using FAT and that succeeds.
If you only want FAT32/FAT16 support, edit SdFatConfig.h at about line 81 and set SDFAT_FILE_TYPE to 1 for ARM boards.
Your open for write fails here in SdFat:
```
create:
// don't create unless O_CREAT and write mode
if (!(oflag & O_CREAT) || !isWriteMode(oflag)) {
DBG_FAIL_MACRO;
goto fail;
}
```
This seems impossible since you use `open(filename, FILE_WRITE)`.
Try running this sketch. It works on my Nano 33 IOT.
```
#include "SdFat.h"
#define CHIP_SELECT 10
#define SPI_SPEED SD_SCK_MHZ(4)
#define TEST_FILENAME "IOT33.txt"
SdFat SD;
void openTest() {
File file = SD.open(TEST_FILENAME, FILE_WRITE);
if (!file) {
Serial.println("file open failed");
} else {
Serial.println("file opened");
if (!file.remove()) {
Serial.println("remove failed");
} else {
Serial.println("file removed");
}
}
}
void setup() {
Serial.begin(9600);
while (!Serial) {}
Serial.println("Type any character to test open");
while (!Serial.available()) {}
Serial.println("start test");
if (!SD.begin(CHIP_SELECT, SPI_SPEED)) {
Serial.println("SD.begin failed");
}
openTest();
}
void loop() {}
```
Here is the output:
```
Type any character to test open
start test
file opened
file removed
```
If it works on your Nano 33 IOT then something you are using in your sketch must conflict with the definition of FILE_WRITE.
username_2: ```
load TablePartConfiguration
load Effect
load TablePartConfiguration
load Effect
DBG_FAIL: FsCache.cpp.41
DBG_FAIL: FatFile.cpp.795
DBG_FAIL: FatFile.cpp.870
DBG_FAIL: FatFileLFN.cpp.332
Failed to create config file
DBG_FAIL: FsCache.cpp.41
DBG_FAIL: FatFile.cpp.795
DBG_FAIL: FatFile.cpp.870
DBG_FAIL: FatFileLFN.cpp.332
```
Would this indicate some cache overflow, or RAM overflow?
Maybe the file I am trying to load is too big?
This can also happen when printing the file:
```
{
"name": "test",
"id": 0,
"status": 1,
"configurationPerState": {
"1": {
"1": {
"maximumBrightness": 0
},
"2": {
"maximumBrightness": 0
},
"3": {
"maximumBrightness": 0
},
"4": {
"maximumBrightness": 0
}
},
"2": {
"1": {
"maximumBrightness": 0
},
"2": {
"maximumBrightness": 0
},
"3": {
"maximumBrightness": 0
},
"4"DBG_FAIL: FsCache.cpp.41
DBG_FAIL: FatFile.cpp.795
⸮DBG_FAIL: FsCache.cpp.41
DBG_FAIL: FatFile.cpp.795
⸮DBG_FAIL: FsCache.cpp.41
.... //never ends, is stuck here
```
username_2: Further investigating revealed that this part is getting it fail:
Removing the print will let it run through without problems.
Looks like this isn't really efficient then 😅
```
void printFile(const char *filename) {
// Open file for reading
File file = SD.open(filename, FILE_READ);
if (!file) {
//File file;
//if (!file.open(filename, FILE_READ)) {
Serial.println("Couldn't open the file to read");
return;
}
else {
Serial.println("Successfully opened the file for reading");
}
// Read the first 200 characters one by one
for (int i = 0; i < 200; i++) {
Serial.print((char)file.read());
}
Serial.println();
// Close the file
file.close();
}
```
username_2: Ok ..... forget what I was saying. It is working now. I had to reduce the speed even further, to 1 MHz; now it is working.
Seems like it was right at the limit, and therefore it was crashing for bigger files.
Thanks for helping, and maybe this is somewhat useful for others, so this is my working code:
```
#include "SdFat.h"
#define CHIP_SELECT 10
#define SPI_SPEED SD_SCK_MHZ(1)
#define TEST_FILENAME "config.json"
#include <ArduinoJson.h>
SdFat SD;
struct Battery {
int cycleCount;
byte level;
byte health;
int maxVoltage;
int designCapacity;
int fullChargeCapacity;
void load(JsonObjectConst);
void save(JsonObject) const;
};
struct TableConfig {
char name[20];
int id;
int status;
Battery battery;
void load(JsonObjectConst);
void save(JsonObject) const;
};
TableConfig config;
bool serializeConfig(const TableConfig &config, Print &dst);
bool deserializeConfig(Stream &src, TableConfig &config);
void Battery::save(JsonObject obj) const {
obj["cycle_count"] = cycleCount;
obj["level"] = level;
obj["health"] = health;
obj["max_voltage"] = maxVoltage;
obj["design_capacity"] = designCapacity;
obj["full_charge_capacity"] = fullChargeCapacity;
}
void Battery::load(JsonObjectConst obj) {
Serial.println(F("load Battery"));
cycleCount = obj["cycle_count"] | 0;
level = obj["level"] | 0;
health = obj["health"] | 0;
maxVoltage = obj["max_voltage"] | 0; // keys fixed to match save(), which writes snake_case
designCapacity = obj["design_capacity"] | 0;
fullChargeCapacity = obj["full_charge_capacity"] | 0;
}
[Truncated]
//else {
//Serial.println(F("Loaded file successfully"));
//}
// Save configuration
saveFile(TEST_FILENAME, config);
// Dump config file
printFile(TEST_FILENAME);
if (!loaded)
Serial.println(F("Restart to load the config"));
else
Serial.println(F("Done!"));
}
void loop() {
}
```
username_2: That's what I am getting with 8MHz:
```
write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
418.41,9540,1211,1221
419.18,9644,1211,1219
Starting read test, please wait.
read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
403.71,1273,1261,1265
403.75,1273,1261,1265
Done
```
username_1: I have no idea why your app fails in various ways. Except for the Arduino SAMD SPI driver, there is no SAMD-dependent code.
username_2: Would you be so kind as to run the code I posted above?
[https://github.com/username_1/SdFat/issues/228#issuecomment-834663225](https://github.com/username_1/SdFat/issues/228#issuecomment-834663225)
It should run out of the box. You just need the ArduinoJson lib.
If this is working for you, it should be a hardware issue on my side.
Really appreciate your will to help me!
username_2: Actually, this no longer seems to be necessary. I changed all my wires just to be sure, and also made them shorter .... aaaaaand now it's working, even with 8 MHz. I don't know why it didn't before, but I don't really care at this point, because that's not my major issue 😄.
Thanks for your support! This is really not be expected for such a free library.
username_2: Still can't get it to work reliably:
Reading works when a file is on the SD card, but as soon as I want to write, it fails.
Also, SD.errorPrint or SD.sdErrorCode() makes the Arduino hang.
```
// Saves the configuration to a file
bool SdCardHandler::writeToFile(const char *filePath,
const TableConfig &config) {
if (!SdCardHandler::initialized) {
return false;
}
if (SD.exists(filePath)) {
DEBUG_LN(F("Removing file before writing config"));
SD.remove(filePath);
} else {
DEBUG_LN(F("Not necesssary to delete file"));
}
File file = SD.open(filePath, O_WRONLY);
if (!file) {
SD.errorPrint(&Serial);
DEBUG_LN(F("Failed to open file to write config"));
/*if (SD.sdErrorCode()) {
SD.errorPrint("message about error");
}*/
return false;
}
// Serialize JSON to file
bool success = config.serializeConfig(file);
if (!success) {
DEBUG_LN(F("Failed to serialize configuration"));
file.close();
return false;
} else {
DEBUG_LN(F("Successfully serialize configuration"));
}
// Close the file
file.close();
return true;
}
```
Output with debug mode, while trying to get the error message:
```
DBG_FAIL: ExFatPartition.cpp.302
SD card successfully initialized.
DBG_FAIL: FatFileLFN.cpp.423
File (config) not found
DBG_FAIL: FatFileLFN.cpp.423
File not found. Could not print.
Not necesssary to delete file
..... hanging
```
Output when *not* trying to get the error message:
```
DBG_FAIL: ExFatPartition.cpp.302 --> we already clarified that above
SD card successfully initialized.
DBG_FAIL: FatFileLFN.cpp.423. --> This makes sense right, because I want to read here not write
File (config) not found
DBG_FAIL: FatFileLFN.cpp.423. --> Same
File not found. Could not print.
Not necesssary to delete file. --> Because not existing
Failed to open file to write config. --> No error from debug from sd library
...
```
username_1: Can't help since my Nano 33 IoT works with my examples and test programs.
username_2: I do not think it has anything to do with the fact that I am using an Arduino Nano. That's not the problem.
Something in my code around the library is messing it up. When I use the clean example, everything works, including writing. As soon as I want to use the code within my application, it fails. I'm going to figure out what's causing it ...
username_2: Ok, so I found the error:
I have this class where I declared SD as a private member, along with the functions:
```
class SdCardHandler {
 private:
  SdFat SD;
  // static "initialized" flag (typo fixed; it also needs an out-of-class definition)
  static bool initialized;

  // the illegal SdCardHandler:: qualifier is removed: a member function
  // defined inside the class body must not be qualified with the class name
  bool initializeSdCard() {
    // Initialize the SD.
    if (!SD.begin(CHIP_SELECT, SPI_SPEED)) {
      SD.initErrorPrint(&Serial);
      DEBUG_LN(F("Failed to initialize SD card."));
      return false;
    } else {
      DEBUG_LN(F("SD card successfully initialized."));
      return true;
    }
  }

  bool loadFromFile(const char *filePath, TableConfig &config) {
    File file = SD.open(filePath, FILE_READ);
    //...
  }

  bool loadFromFile(char *filePath, String output) {
    File file = SD.open(filePath, FILE_WRITE);
    //...
  }
}; // the class definition was missing this closing brace
```
I think that the SD member variable was no longer available once the object of the class was destroyed, and therefore it failed.
When declaring `SdFat SD;` as a global in the cpp file, it works ...
username_1: SD has the cache and all characteristics of the volume on the card. File structures have a pointer to the SdFat instance associated with the file.
So the SdFat instance used to open a file must stay in scope for as long as you access the file.
angular/material | 320606775 | Title: build(gulp-sass): update to new version
Question:
username_0: ## Bug, enhancement request, or proposal:
Proposal
### What is the expected behavior?
The project's dependencies are verified as secure.
### What is the current behavior?
There is a warning due to an out-of-date sub-dependency of `gulp-sass` and `node-sass`. **This only affects the library's build tooling and not the deployment assets or any application built with the library**.
### What is the use-case or motivation for changing an existing behavior?
Compliance.
### Which versions of AngularJS, Material, OS, and browsers are affected?
- AngularJS Material: 1.1.9 and prior
### Is there anything else we should know? Stack Traces, Screenshots, etc.
N/A
Answers:
username_0: Still blocked by node-sass's v5 release, which has slipped from 2-3 weeks to 4-5+ with no real sign that it is going to happen soon.
username_0: node-sass released v4.9.1 to fix this issue earlier today: https://github.com/sass/node-sass/issues/2355#issuecomment-402674792
username_0: It looks like we're still pulling in vulnerable versions of `hoek` via `[email protected]` and `[email protected]`.
username_0: I haven't yet tracked down the `karma`-related issues, but https://github.com/nodejs/node-gyp/pull/1471 is another `Moderate`-level issue with `hoek` that should get resolved "soon". It looks like the PR is ready to be merged in `node-gyp`.
Status: Issue closed
|
sdnhm-vertnet/herps | 779999445 | Title: Monthly VertNet data use report for 2020-12, resource sdnhm_herps
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/84b4d6e4-f762-11e1-a439-00145eb45e9a/202012/
Raw text and JSON-formatted versions of the report are also available for
download from this link.
A copy of the text version has also been uploaded to your GitHub
repository under the "reports" folder at:
https://github.com/sdnhm-vertnet/herps/tree/master/reports
A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/84b4d6e4-f762-11e1-a439-00145eb45e9a/
You can find more information on the reporting system, along with an
explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
umijs/umi | 376649684 | Title: including react-mapbox-gl in the project prevents umi global style
Question:
username_0: Trying to integrate Mapbox into a UMI project caused a serious issue: global styles are no longer being loaded. It is even verifiable in the ant-design-pro demo.
To replicate the issue, first verify that the global style works fine by setting something like:
```css
html,
body,
#root {
  height: 100%;
  color: red;
}
```
then, after doing `yarn start`, red fonts are visible at the top of the page:
<img width="1679" alt="screen shot 2018-11-01 at 7 46 38 pm" src="https://user-images.githubusercontent.com/11444758/47891073-e81e2c00-de0e-11e8-983f-cfd77a165be1.png">
Now, install react-mapbox-gl:
`yarn add mapbox-gl react-mapbox-gl`
now just run the project
`yarn start`
The global style is gone and there is a padding in the page body:
<img width="1680" alt="screen shot 2018-11-01 at 7 49 22 pm" src="https://user-images.githubusercontent.com/11444758/47891183-5c58cf80-de0f-11e8-8c65-4c4273270a7e.png">
Now if you remove just react-mapbox-gl:
`yarn remove react-mapbox-gl`
Everything works just as expected.
Why is that happening even without using anything from the new packages?
Answers:
username_1: Please give a simple and reproducible repo, I tried it and can't reproduce your problem.
[username_1-Vyj0P2.zip](https://github.com/umijs/umi/files/2541346/username_1-Vyj0P2.zip)
username_0: Hi! Thank you for the quick response. I created my project using `yarn create umi`. You can check the issue here: https://github.com/username_0/umi-issue.
As you can see, global styles are not being used. But if you remove the mapbox dependencies or build the project, then the globals are back.
username_1: In case of dll-related problems, please `exclude` the problematic package.
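For illustration, here is a hedged sketch of such an exclusion. It assumes umi 2 with umi-plugin-react and assumes its `dll` option accepts an `exclude` list; the exact option names may differ between versions:
```ts
// .umirc.ts (sketch only, not verified against a specific umi version)
// Idea: keep the mapbox packages out of the prebuilt dll so the dll step
// no longer interferes with how global styles are loaded.
export default {
  plugins: [
    ['umi-plugin-react', {
      dll: {
        exclude: ['react-mapbox-gl', 'mapbox-gl'], // the packages reported as problematic
      },
    }],
  ],
};
```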
Status: Issue closed
|
snowplow/snowplow | 164703725 | Title: Document how to load Redshift via SSL
Question:
username_0: Currently this is missing from the documentation.
We should update [this page](https://github.com/snowplow/snowplow/wiki/Common-configuration) with the different valid configuration options for `ssl_mode`. The legal values are:
* `disabled` (default)
* `require`
* `verify-ca` or
* `verify-full`
We should similarly update https://github.com/snowplow/snowplow/wiki/Setting-up-Redshift#ssl
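For example, a hedged sketch of a Redshift storage target using `ssl_mode` (the key placement is an assumption here and depends on the config version in use):
```yaml
# Sketch only: nesting and surrounding keys vary between Snowplow releases.
storage:
  targets:
    - name: "My Redshift database"
      type: redshift
      # host, port, credentials, etc.
      ssl_mode: require # one of: disabled | require | verify-ca | verify-full
```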
Answers:
username_1: @username_0 - I updated the [Common Configuration](https://github.com/snowplow/snowplow/wiki/Common-configuration) article. However, to update the [Setting up Redshift](https://github.com/snowplow/snowplow/wiki/Setting-up-Redshift#ssl), I need a real example of (access to) SSL enabled cluster.
username_0: Thanks Ihor!
username_2: Not sure I understand what you need @username_1 for the second part?
username_1: @username_2 - that article is all about screenshots. If that is what expected, I need to produce a screenshot of some client. If that is not required and a few sentences will do then I will completed the task.
username_2: A few sentences is fine...
username_1: done
username_2: Great, closing!
Status: Issue closed
|
Neovici/cosmoz-i18next | 601027180 | Title: There is no way of passing `interpolation` to the i18n.t call
Question:
username_0: All calls are passed through `argumentsToObject`, which transforms the arguments of any call into `{0: val, 1: val, ...}` objects, with no way of passing in additional options like `interpolation`.
Ideally, it would merge object parameters into the final args object.
Proposed solution:
```js
_('User visited {0}', '<a href="https://google.com">Google</a>', {interpolation: {escapeValue: false}})
```
The final args would be:
```js
{
0: '<a href="https://google.com">Google</a>',
interpolation: {escapeValue: false}
}
```
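A minimal sketch of how `argumentsToObject` could implement this merge (an illustration of the proposal, not the library's current code):
```ts
// Hypothetical implementation of the proposed merging behaviour.
function argumentsToObject(...args: unknown[]): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  let index = 0;
  for (const arg of args) {
    if (typeof arg === 'object' && arg !== null && !Array.isArray(arg)) {
      // Merge option objects (e.g. { interpolation }) into the final args.
      Object.assign(result, arg);
    } else {
      // Positional interpolation values keep their numbered keys: 0, 1, ...
      result[index++] = arg;
    }
  }
  return result;
}
```
With the call from the example above, this yields exactly the `{0: ..., interpolation: ...}` object shown.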
Possible breaking changes of proposed solution:
* interpolation of Object parameters would change from current usage<issue_closed>
Status: Issue closed |
pop-os/pop | 853142531 | Title: Problem with bluetooth
Question:
username_0: My Pop os Bluetooth button is not working.
I have tried "rfkill" to unblock Bluetooth. Everything is unblocked when checked using "rfkill list".
Every time I have to use Bluetooth I have to use the command "sudo systemctl restart bluetooth".
Please help me with this problem.
Thank You
Answers:
username_1: @username_0: We'll need more info, reply with the output from `cat /etc/os-release` and if possible give us the make and model of you computer.
Also what Bluetooth button are you referring too? Could you supply screen shots or a more detailed description of what you are attempting to make work before using "sudo systemctl restart bluetooth"?
username_2: Isn't this a duplicate of https://github.com/pop-os/pop/issues/1623?
username_0: Yes...we both are facing the same issue.
Status: Issue closed
username_2: Let's not open multiple issues about the same bug then. We can work on a solution in the the other issue. |
jondauz/geocontent | 101965401 | Title: Define Ajax timeout value
Question:
username_0: Might want to add a timeout value to the ajax calls on lines 19 and 29 (or a default timeout in ajaxSetup). I don't think jQuery ajax has this set by default.
https://api.jquery.com/jQuery.ajax/
https://api.jquery.com/jQuery.ajaxSetup/
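For example, a hedged sketch of both options (the URL and handlers below are placeholders, since the actual call sites on lines 19 and 29 aren't shown here):
```ts
import $ from 'jquery';

// Option 1: a per-request timeout, in milliseconds, on each $.ajax call.
$.ajax({
  url: '/geocontent/lookup', // placeholder URL
  timeout: 5000,
  success: (data) => console.log('got data', data),
  error: (_xhr, textStatus) => console.warn('request failed:', textStatus), // textStatus is 'timeout' on expiry
});

// Option 2: a default timeout applied to all subsequent $.ajax calls.
$.ajaxSetup({ timeout: 5000 });
```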
Status: Issue closed
Answers:
username_1: Timeout has been added to ajaxSetup. |
beeware/briefcase | 600162288 | Title: Call "briefcase create" during CI/CD workflow in Windows virtual machine
Question:
username_0: I'm having a problem calling `briefcase create` in my CI/CD process.
In my "publish" job I want to publish the "*.msi" file of my project. In order to do that, I need to call `briefcase create` + `package` in a Windows virtual machine.
The problem is that `briefcase create` requires the WiX Toolset on Windows, which I don't know how to install from the command line of my CI/CD workflow.
Do you have an idea of how to approach that? Is it even possible to download the WiX Toolset from the command line? Any help would be appreciated.
It is worth mentioning that I'm using the CircleCI CI/CD platform.
Answers:
username_1: Unfortunately, I can't say I know.
You can definitely *download* WiX from the command line (you can point curl/wget/any other downloader at the public download URL). Installing it is another matter entirely. As I understand it, MSI installers are *supposed* to be scriptable - but I have no idea how that is achieved. Someone with Windows sysadmin experience might be able to help.
Another option *might* be to install WiX on a physical machine, then zip up what was installed, and unpack *that* into your CI environment. I don't *think* WiX does anything with the registry during install that is critical - you only need access to the executables, and the WIX_HOME environment variable set.
If you find an answer, I'd definitely be interested in hearing about it, though. I'd like to be able to document how to do automated releases using CI platforms.
username_2: Hey @username_0 ! Cool that you're setting this up. I've done some Windows stuff and thought I'd chime in, in the hopes of being useful.
If I'm looking at the right page, https://github.com/wixtoolset/wix3/releases/tag/wix3112rtm shows they distribute some ZIP files as well as an EXE.
It looks to me that if you download `wix311-binaries.zip` and unzip them in your CircleCI configuration, and if you can put the binaries in a place briefcase will find them, then that'd be what you want.
Let me know what you think. Happy to help more if desired, or hop on a call/video chat if you want another pair of eyes.
username_0: Hi there @username_2 ! Thanks for the input.
I did some research and found out [here](https://chocolatey.org/packages/wixtoolset) that Wix Toolset can be installed using the following command:
```
choco install wixtoolset
```
I haven't tried yet using it in my CI/CD workflow, but I'll update once I did.
The only thing that needs to be considered is that this command should be running as administrator, so I need to check how to do that in CircleCI
Your idea sounds like it should work, but I don't feel comfortable saving an *.exe file as part of my repo unless if I have too. It feels like a patch. If the `choco` stuff won't work, I'll consider your suggestion again :)
In any case, I'll keep you updated!
username_2: What I intended to say was that your CircleCI config could include something like this PowerShell fragment:
```
Invoke-WebRequest -Uri https://github.com/wixtoolset/wix3/releases/download/wix3112rtm/wix311-binaries.zip -OutFile wix.zip
Expand-Archive -Path wix.zip -DestinationPath wix  # PowerShell's built-in unzip
```
i.e., you wouldn't save Wix code into your repo, but you'd download it and unzip it in your CircleCI config.
Your way sounds solid, too. Looking forward to hearing what you find!
username_0: Hey, @username_1 asked me to share with you my user story, so here it is:
Our story
I'm a physics student at Tel Aviv University and an algorithms engineer at Salesforce. During my studies at the university, I came across the need to fit experimental data to theoretical formulas. The university provided us with very primitive Matlab code that can do this, but this code was written 20 years ago… I found it quite insufficient for my needs and my fellow students' needs.
So I wrote a new Python library called [Eddington](https://github.com/EddLabs/) that does exactly that. It wraps up SciPy, Matplotlib, and NumPy in a very easy and straightforward way and can help students fit their data very easily to theoretical models.
After I did that, I pitched the library to the professor responsible for TAU laboratories in order to give permission for other students to use my library. I thought that most students like me need something better to work with than what they already have. After I talked to her I realized that there are 3 types of students that may use my platform:
1. Researchers that know python and would want to use Eddington as a library for their experiments analysis code.
2. Students that are familiar with programming and CLIs but not necessarily know python. They will probably feel comfortable using Eddington as a CLI.
3. Students who have no programming knowledge and need a GUI in order to operate the library.
While the first 2 were easy to implement, the third one was tricky. I don’t know how to program in javascript and felt that trying to call Matplotlib through javascript may cause some unnecessary errors. I wanted to use a GUI library which is fully written in python, a programming language that I know very well, and could be published to all known operating systems such as Windows, Mac, Linux, etc. This is why I chose “BeeWare” ad my GUI platform.
At the moment the Eddington platform is not documented at all, but it works just fine! After the COVID-19 pandemic will pass we will start beta-testing with few students at TAU, and after that, we will publish an official release of the project with full documentation.
At the meanwhile, you can take a look at [here](https://github.com/EddLabs/eddington_gui) to see how Toga and Briefcase are being used as a GUI platform in my project. Maybe you can give some feedback on how to use it better. Pay attention that I’m developing on Windows, therefore I’m not using the full power of Toga because not all features are yet available for Windows. I’m trying to keep contributing to the BeeWare project what I need along with my work on Eddington.
username_1: @username_0 That's awesome - thanks for sharing!
For the record, it sounds like you're *exactly* the type of user that BeeWare is pitched at. My own background is in Physics (although I'm *long* out of practice :-) ) so I'm well aware of the extent to which people in the sciences are smart and capable, and often know Python. And if they don't, they're entirely capable of learning it - and they're often quite motivated to do so because of the wealth of numerical processing and other data analysis tooling that exists.
People with a science background also part of a "long tail" of app developers - people who have extremely specific application needs that aren't *inherently* complicated, but that will *never* be satisfied well by a general purpose tool. That makes a graphics toolkit that makes it easy to develop GUIs a valuable resource.
In terms of what you could be "doing better" - I've only taken a quick look, but nothing obvious stands out. Plus, you've got this working on Windows, which is the backend that has seen the least ongoing maintenance - so that's more than a little impressive :-) If you're comfortable having Eddington on the Briefcase & Toga success stories page, I'd love to add it.
One thing I would suggest as a potential larger project: Toga *has* a canvas widget that is implemented on macOS and GTK - and we've also had [a contribution that allows that canvas to be used as a matplotlib rendering backend](https://github.com/beeware/toga_chart). That charting widget hasn't been looked at in a while, and the canvas obviously isn't implemented in Windows at this time - but... if someone wanted to look into that... :-)
I know that might *sound* like a big project, but I believe it should be a lot less scary than it sounds. The GTK implementation of canvas is 160 lines of code. There's a base problem of picking Winforms widgets to implement the Toga API, but it looks like Winforms has a couple of obvious candidates (`System.Windows.Controls.Canvas` and `System.Drawing.Graphics` being the two I can see). After that, it's mostly a matter of working out "how do I spell 'draw a red line from A to B' in Winforms".
username_0: Hey. Thanks for all the compliments! This is really exciting :)
If you wish, you can surely add Eddington to Briefcase & Toga success stories page (even though I don't see it as a success story yet since I'm the only user of it at the moment. maybe in a few months).
As for Canvas implementation in Windows, please open an issue for it and assign it to me. I'll give it a go. If I could make Matplotlib work with a canvas widget, that could be awesome!
Thank's for all the support. I hope to continue contributing to your project, and myself in the process. My CI/CD workflow should be completed soon and I'll post here a link so you could take a look at it.
username_0: So I promised you I let you know when I have a working CI/CD process. So I have one!
Unfortunately, CircleCI does not support guest view mode so you cannot see the actual job in action, but if you take a look [here](https://github.com/EddLabs/eddington_gui/blob/dev/.circleci/config.yml) and see the config file defining the jobs.
As you could see the "publish" job creates and deploys the *msi* file as part of the [release](https://github.com/EddLabs/eddington_gui/releases).
Thank you for your help! Keep the good work. I will try to contribute my part as much as I can :)
username_1: Awesome! Thanks for sharing.
At this point, it would appear that there's nothing directly actionable on the part of Briefcase as a project. Your config demonstrates CI/CD is possible; anything beyond that is going to be a function of the exact CI/CD system in use.
On that basis, I'm going to close this ticket; if you think there are other features that should be added to support CI/CD configurations, please open a new ticket.
Status: Issue closed
|