question | answer | tag | question_id | score
---|---|---|---|---
I'm looking for an elegant way to define an API that can consume JSON data as well as form data. The following snippet works, but it's not elegant and requires all kinds of ugly code in the backend. Is there a better way to define this?
What works right now:
paths:
  /pets:
    post:
      consumes:
        - application/x-www-form-urlencoded
        - application/json
      parameters:
        - name: nameFormData
          in: formData
          description: Updated name of the pet
          required: false
          type: string
        - name: nameJSON
          in: body
          description: Updated name of the pet
          required: false
          type: string
Basic idea of how I'd like it to work:
paths:
  /pets:
    post:
      consumes:
        - application/x-www-form-urlencoded
        - application/json
      parameters:
        - name: name
          in:
            - formData
            - body
          description: Updated name of the pet
          required: true
          type: string
But this doesn't work because the in value must be a string, not an array.
Any good ideas?
| OpenAPI 2.0
In OpenAPI 2.0, there's no way to describe that. Form and body parameters are mutually exclusive, so an operation can have either form data OR JSON body but not both. A possible workaround is to have two separate endpoints - one for form data and another one for JSON - if that is acceptable in your scenario.
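For illustration only (the second path name and the inline schema are made up, not part of the question), the two-endpoint workaround could be sketched like this:
paths:
  /pets:
    post:
      consumes:
        - application/json
      parameters:
        - name: body
          in: body
          required: true
          schema:
            type: object
            required: [name]
            properties:
              name:
                type: string
      responses:
        '200':
          description: OK
  /pets/form:
    post:
      consumes:
        - application/x-www-form-urlencoded
      parameters:
        - name: name
          in: formData
          required: true
          type: string
      responses:
        '200':
          description: OK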
OpenAPI 3.x
Your scenario can be described using OpenAPI 3.x. The requestBody.content.<media-type> keyword is used to define various media types accepted by the operation, such as application/json and application/x-www-form-urlencoded, and their schemas. Media types can have the same schema or different schemas.
openapi: 3.0.0
...
paths:
  /pets:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Pet'
          application/x-www-form-urlencoded:
            schema:
              $ref: '#/components/schemas/Pet'
      responses:
        '200':
          description: OK
components:
  schemas:
    Pet:
      type: object
      properties:
        name:
          type: string
          description: Updated name of the pet
      required:
        - name
Further info:
OAS3: Describing Request Body
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md#support-for-x-www-form-urlencoded-request-bodies
https://blog.readme.io/an-example-filled-guide-to-swagger-3-2/#requestformat
| OpenAPI | 42,287,298 | 10 |
I would like to describe the XML response payload of a RESTful interface with OpenAPI 2.0 (Swagger 2.0). However, I struggle describing a particular XML tag in the OpenAPI data model.
I can't get Swagger UI to create an appropriate example XML tag in this form, with an attribute and content between the opening and closing XML tags:
<Person id="bar">foo</Person>
The documentation (here) only describes how to model a tag with sub tags (type: object) or a tag with content (type: string), but not both at the same time.
I tried this, which the Swagger Editor accepts without any errors or warnings:
definitions:
  Person:
    type: string
    example: foo
    properties:
      id:
        type: string
        example: bar
        xml:
          attribute: true
but it will be rendered by Swagger UI to the following example:
<Person id="bar"></Person>
As you can see, no "foo" content in there.
| Unfortunately there's no way to represent that using the OpenAPI Specification 2.0, 3.0, or 3.1
This issue is being tracked here and could be addressed in future versions of the specification.
| OpenAPI | 42,023,864 | 10 |
Learning about REST APIs and am following https://apihandyman.io/writing-openapi-swagger-specification-tutorial-part-2-the-basics/.
The API can receive two parameters: username and bla, but only username is required by using the required keyword. This makes sense to me.
The API will return firstname, lastname, and username, but only username is required by using the required keyword. This does not make sense to me. Does the lack of the required keyword indicate that the other two might sometimes not be required? What influences whether they are or are not?
paths:
  /persons/{username}:
    get:
      summary: Gets a person
      description: Returns a single person for its username.
      parameters:
        - name: username
          in: path
          required: true
          description: The person's username
          type: string
        - name: bla
          in: query
          description: bla bla bla
          type: string
      responses:
        200:
          description: A Person
          schema:
            required:
              - username
            properties:
              firstName:
                type: string
              lastName:
                type: string
              username:
                type: string
        404:
          description: The Person does not exists.
| Your interpretation is correct. If a property of a response object is listed in the required property list, it must be present in the response object for it to be valid, quite similar to the required field in a parameter object. Whether a non-required property is included in the response or not is up to the business logic of your application to decide.
Some more information with pointers to the relevant parts of the specification below:
The semantics of the required property list of a response object is defined as part of the Schema Object section of the OpenAPI specification. There it says that the schema object "is based on the JSON Schema Specification Draft 4 and uses a predefined subset of it".
In the corresponding section on the required validation keyword of the JSON Schema Validation specification its semantics is defined as follows:
5.4.3. required
5.4.3.1. Valid values
The value of this keyword MUST be an array. This array MUST have at
least one element. Elements of this array MUST be strings, and MUST be
unique.
5.4.3.2. Conditions for successful validation
An object instance is valid against this keyword if its property set
contains all elements in this keyword's array value.
You'll find further examples of how the required keyword can be used in the examples section of the JSON Schema specification or Part 5, Section 2.2 of the tutorial you're following.
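As a concrete (hypothetical) illustration, both of the following response bodies are valid against the schema above, because only username is listed as required:
{ "username": "jdoe" }
{ "username": "jdoe", "firstName": "John", "lastName": "Doe" }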
| OpenAPI | 39,581,039 | 10 |
How are Packer and Docker different? Which one is easier/quickest to provision/maintain and why? What is the pros and cons of having a dockerfile?
| Docker is a system for building, distributing and running OCI images as containers. Containers can be run on Linux and Windows.
Packer is an automated build system to manage the creation of images for containers and virtual machines. It outputs an image that you can then take and run on the platform you require.
For v1.8 this includes - Alicloud ECS, Amazon EC2, Azure, CloudStack, DigitalOcean, Docker, Google Cloud, Hetzner, Hyper-V, Libvirt, LXC, LXD, 1&1, OpenStack, Oracle OCI, Parallels, ProfitBricks, Proxmox, QEMU, Scaleway, Triton, Vagrant, VirtualBox, VMware, Vultr
Docker's Dockerfile
Docker uses a Dockerfile to manage builds which has a specific set of instructions and rules about how you build a container.
Images are built in layers. Each FROM, RUN, ADD, and COPY instruction modifies the layers included in an OCI image. These layers can be cached, which helps speed up builds. Each layer can also be addressed individually, which helps with disk usage and download usage when multiple images share layers.
Dockerfiles have a bit of a learning curve; it's best to look at some of the official Docker images for practices to follow.
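As a minimal illustration (the base image, package, and paths are chosen arbitrarily), each instruction below adds or modifies a layer that can be cached and shared between builds:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends nginx
COPY site/ /var/www/html/
CMD ["nginx", "-g", "daemon off;"]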
Packer's Docker builder
Packer does not require a Dockerfile to build a container image. The docker plugin has an HCL or JSON config file which starts the image build from a specified base image (like FROM).
Packer then allows you to run standard system config tools called "Provisioners" on top of that image. Tools like Ansible, Chef, Salt, shell scripts etc.
This image will then be exported as a single layer, so you lose the layer caching/addressing benefits compared to a Dockerfile build.
Packer allows some modifications to the build container environment, like running as --privileged or mounting a volume at build time, that Docker builds will not allow.
Times you might want to use Packer are if you want to build images for multiple platforms and use the same setup. It also makes it easy to use existing build scripts if there is a provisioner for it.
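A minimal, hypothetical sketch of the HCL form (the image name, packages, and tag are placeholders, not taken from the question):
source "docker" "base" {
  image  = "ubuntu:20.04"   # takes the place of FROM
  commit = true
}

build {
  sources = ["source.docker.base"]

  # "Provisioners" run inside the build container
  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y --no-install-recommends nginx"
    ]
  }

  # Tag the committed image so it can be run or pushed afterwards
  post-processor "docker-tag" {
    repository = "example/ubuntu-nginx"
    tags       = ["latest"]
  }
}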
| Packer | 47,169,353 | 76 |
I am sitting with a situation where I need to provision EC2 instances with some packages on startup. There are a couple of (enterprise/corporate) constraints that exist:
I need to provision on top of a specific AMI, which adds enterprisey stuff such as LDAP/AD access and so on
These changes are intended to be used for all internal development machines
Because of mainly the second constraint, I was wondering where is the best place to place the provisioning. This is what I've come up with
Provision in Terraform
As it states, I simply provision in terraform for the necessary instances. If I package these resources into modules, then provisioning won't "leak out". The disadvantages
I won't be able to add a different set of provisioning steps on top of the module?
A change in the provisioning will probably result in instances being destroyed on apply?
Provisioning takes a long time because of the packages it tries to install
Provisioning in Packer
This is based on the assumption that Packer allows you to provision on top of AMIs so that AMIs can be "extended". Also, this will only be used in AWS so it won't use other builders necessarily. Provisioning in Packer makes the Terraform Code much simpler and terraform applies will become faster because it's just an AMI that you fire up.
For me both of these methods have their place. But what I really want to know is when do you choose Packer Provisioning over Terraform Provisioning?
| Using Packer to create finished (or very nearly finished) images drastically shortens the time it takes to deploy new instances and also allows you to use autoscaling groups.
If you have Terraform run a provisioner such as Chef or Ansible on every EC2 instance creation you add a chunk of time for the provisioner to run at the time you need to deploy new instances. In my opinion it's much better to do the configuration up front and ahead of time using Packer to bake as much as possible into the AMI and then use user data scripts/tools like Consul-Template to provide environment specific differences.
Packer certainly can build on top of images and in fact requires a source_ami to be specified. I'd strongly recommend tagging your AMIs in a way that allows you to use source_ami_filter in Packer and Terraform's aws_ami data source so when you make changes to your AMIs Packer and Terraform will automatically pull those in to be built on top of or deployed at the next opportunity.
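As a hedged sketch of the Packer side of that (the tag key, values, and AMI name are invented for illustration):
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "app-{{timestamp}}",
      "source_ami_filter": {
        "filters": {
          "tag:Role": "base",
          "state": "available"
        },
        "owners": ["self"],
        "most_recent": true
      }
    }
  ]
}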
I personally bake a reasonably lightweight "Base" AMI that does some basic hardening and sets up monitoring and logging that I want for all instances that are deployed and also makes sure that Packer encrypts the root volume of the AMI. All other images are then built off the latest "Base" AMI and don't have to worry about making sure those things are installed/configured or worry about encrypting the root volume.
By baking your configuration into the AMI you are also able to move towards the immutable infrastructure model which has some major benefits in that you know that you can always throw away an instance that is having issues and very quickly replace it with a new one. Depending on your maturity level you could even remove access to the instances so that it's no longer possible to change anything on the instance once it has been deployed which, in my experience, is a major factor in operational issues.
Very occasionally you might come across something that makes it very difficult to bake an AMI for and in those cases you might choose to run your provisioning scripts in a Terraform provisioner when it is being created. Sometimes it's simply easier to move an existing process over to using provisioners with Terraform than baking the AMIs but I would push to move things over to Packer where possible.
| Packer | 49,314,752 | 28 |
I'm using packer with ansible provisioner to build an ami, and terraform to setup the infrastructure with that ami as a source - somewhat similar to this article: http://www.paulstack.co.uk/blog/2016/01/02/building-an-elasticsearch-cluster-in-aws-with-packer-and-terraform
When command packer build pack.json completes successfully I get the output ami id in this format:
eu-central-1: ami-12345678
In my terraform variables variables.tf I need to specify the source ami id, region etc. The problem here is that I don't want to specify them manually or multiple times. For region (that I know beforehand) it's easy since I can use environment variables in both situations, but what about the output ami? Is there a built-in way to chain these products or some not so hacky approach to do it?
EDIT: Hacky approach for anyone who might be interested. In this solution I'm greping the aws region & ami from packer output and use a regular expression in perl to write the result into a terraform.tfvars file:
vars=$(pwd)"/terraform.tfvars"
packer build pack.json | \
tee /dev/tty | \
grep -E -o '\w{2}-\w+-\w{1}: ami-\w+' | \
perl -ne '@parts = split /[:,\s]+/, $_; print "aws_amis." . $parts[0] ." = \"" . $parts[1] . "\"\n"' > ${vars}
| You should consider using Terraform's Data Source for aws_ami. With this, you can rely on custom tags that you set on the AMI when it is created (for example a version number or timestamp). Then, in the Terraform configuration, you can simply filter the available AMIs for this account and region to get the AMI ID that you need.
https://www.terraform.io/docs/providers/aws/d/ami.html
data "aws_ami" "nat_ami" {
most_recent = true
executable_users = ["self"]
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["amzn-ami-vpc-nat*"]
}
name_regex = "^myami-\\d{3}"
owners = ["self"]
}
NOTE: in the example above (from the docs), the combination of filters is probably excessive. You can probably get by just fine with something like:
data "aws_ami" "image" {
most_recent = true
owners = ["self"]
filter {
name = "tag:Application"
values = ["my-app-name"]
}
}
output "ami_id" {
value = "${data.aws_ami.image.id}"
}
An additional benefit of this is that you can deploy to multiple regions with the same configuration and no variable map!
| Packer | 37,357,618 | 23 |
I am struggling to pass input parameter to packer provisioning script. I have tried various options but no joy.
Objective is my provision.sh should accept input parameter which I send during packer build.
packer build -var role=abc test.json
I am able to get the user variable in json file however I am unable to pass it provision script. I have to make a decision based on the input parameter.
I tried something like
"provisioners":
{
"type": "shell",
"scripts": [
"provision.sh {{user `role`}}"
]
}
But packer validation itself is failed with no such file/directory error message.
It would be real help if someone can help me on this.
Thanks in advance.
| You should use the environment_vars option, see the docs Shell Provisioner - environment_vars.
Example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"HOSTNAME={{user `vm_name`}}",
"FOO=bar"
],
"scripts": [
"provision.sh"
],
}
]
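Inside provision.sh the values then arrive as ordinary environment variables, for example (illustrative script, not from the question):
#!/bin/sh
# HOSTNAME and FOO are injected by Packer via environment_vars
echo "Provisioning host ${HOSTNAME} with FOO=${FOO}"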
| Packer | 47,596,369 | 22 |
I would like to build a Docker image without docker itself. I have looked at Packer (http://www.packer.io/docs/builders/docker.html), but it requires that Docker be installed on the builder host.
I have looked at the Docker Registry API documentation but this information doesn't appear to be there.
I guess that the image is simply a tarball, but I would like to see a complete specification of the format, i.e. what exact format is required and whether there are any metadata files required. I could attempt downloading an image from the registry and look what's inside, but there is no information on how to fetch the image itself.
The idea of my project is to implement a script that creates an image from artifacts I have compiled, and uploads it to the registry. I would like to use OpenEmbedded for this purpose, essentially this would be an extension to Bitbake.
| The Docker image format is specified here: https://github.com/docker/docker/blob/master/image/spec/v1.md
The simplest possible image is a tar file containing the following:
repositories
uniqid/VERSION
uniqid/json
uniqid/layer.tar
Where VERSION contains 1.0, layer.tar contains the chroot contents and json/repositories are JSON files as specified in the spec above.
The resulting tar can be loaded into docker via docker load < image.tar
| Packer | 25,583,038 | 21 |
I've been looking at Packer.io, and would love to use it to provision/prepare the vagrant (VirtualBox) boxes used by our developers.
I know I could build the boxes with VirtualBox using the VirtualBox Packer builder, but find the layer stacking of Docker to provide a much faster development process of the boxes.
How do I produce the image with a Dockerfile and then export it as a Vagrant box?
| Find the size of the docker image from docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mybuntu 1.01 7c142857o35 2 weeks ago 1.94 GB
Run a container based on the image docker run mybuntu:1.01
Create a QEMU image from the container,
Also, use the size of the image in the first command (seek=IMAGE_SIZE).
And, for the docker export command retrieve the appropriate container id from docker ps -a
dd if=/dev/zero of=mybuntu.img bs=1 count=0 seek=2G
mkfs.ext2 -F mybuntu.img
sudo mount -o loop mybuntu.img /mnt
docker export <CONTAINER-ID> | sudo tar x -C /mnt
sudo umount /mnt
Use qemu-utils to convert to vmdk
sudo apt-get install qemu-utils
qemu-img convert -f raw -O vmdk mybuntu.img mybuntu.vmdk
More info on formats that are available for conversion can be found here.
Now you can import the vmdk file in virtualbox
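If the end goal is a Vagrant box, one possible follow-up (assuming the vmdk has been attached to a VirtualBox VM named, say, mybuntu-vm) is:
vagrant package --base mybuntu-vm --output mybuntu.box
vagrant box add mybuntu mybuntu.box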
| Packer | 23,436,613 | 20 |
I'm using AWS Cloudformation to setup numerous elements of network infrastructure (VPCs, SecurityGroups, Subnets, Autoscaling groups, etc) for my web application. I want the whole process to be automated. I want click a button and be able to fire up the whole thing.
I have successfully created a Cloudformation template that sets up all this network infrastructure. However the EC2 instances are currently launched without any needed software on them. Now I'm trying to figure out how best to get that software on them.
To do this, I'm creating AMIs using Packer.io. But some people have instead urged me to use Cloud-Init. What heuristic should I use to decide what to bake into the AMIs and/or what to configure via Cloud-Init?
For example, I want to preconfigure an EC2 instance to allow me (saqib) to login without a password from my own laptop. Thus the EC2 must have a user. That user must have a home directory. And in that home directory must live a file .ssh/known_hosts containing encrypted codes. Should I bake these directories into the AMI? Or should I use cloud-init to set them up? And how should I decide in this and other similar cases?
| I like to separate out machine provisioning from environment provisioning.
In general, I use the following as a guide:
Build Phase
Build a Base Machine Image with something like Packer, including all software required to run your application. Create an AMI out of this.
Install the application(s) onto the Base Machine Image creating an Application Image. Tag and version this artifact. Do not embed environment specific stuff here like database connections etc. as this precludes you from easily reusing this AMI across different environment runtimes.
Ensure all services are stopped
Release Phase
Spin up an environment consisting of the images and infra required, using something like CFN.
Use Cloud-Init user-data to configure the application environment (database connections, log forwarders etc.) and then start the applications/services
This approach gives the greatest flexibility and cleanly separates out the various concerns of a continuous delivery pipeline.
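As a loose illustration of that release-phase step (the file paths, values, and service name are hypothetical), Cloud-Init user data could look like:
#cloud-config
write_files:
  - path: /etc/myapp/environment
    permissions: "0640"
    content: |
      DB_HOST=db.prod.internal
      DB_NAME=myapp
      LOG_FORWARDER=logs.prod.internal:514
runcmd:
  - systemctl enable myapp
  - systemctl start myapp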
| Packer | 28,824,594 | 17 |
So I'm trying to use Packer to create an AWS image and specify some user data via user_data_file. The contents of this file needs to be run when the instance boots as it will be unique each time. I can't bake this into the AMI.
Using packer I have the following:
{
  "variables": {
    "ami_name": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-c8580bdf",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "{{ user `ami_name` }}-{{ isotime | clean_ami_name }}",
      "user_data_file": "user_data.sh",
      "tags": {
        "os_version": "ubuntu",
        "built_by": "packer",
        "build_on": "{{ isotime | clean_ami_name }}",
        "Name": "{{ user `ami_name` }}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "playbook.yml",
      "user": "ubuntu"
    }
  ]
}
The contents of my user_data shell script are just a few basic config lines for a package that was installed via the ansible scripts that were run in the provisioners step. Watching the output of Packer I can confirm that the ansible scripts all run.
Packer completes and creates the AMI, but the user data piece is never executed. No record of it exists in the resulting image: there is no /userdata.log file, and /var/lib/cloud/instance/user-data.txt is empty. I feel like I'm missing something basic, as this should be a very simple thing to do with Packer.
| Rereading this I think maybe you misunderstood how user-data scripts work with Packer.
user_data is provided when the EC2 instance is launched by Packer. In the end, after provisioning, this instance is snapshotted and saved as an AMI.
When you launch new instances from the created AMI, they don't get the same user data; they get whatever user data you specify when launching those new instances.
The effect of the initial (defined in your template) user-data might or might not be present in the new instance depending if the change was persisted in the AMI.
| Packer | 45,110,795 | 14 |
I have a shell provisioner in packer connected to a box with user vagrant
{
  "environment_vars": [
    "HOME_DIR=/home/vagrant"
  ],
  "expect_disconnect": true,
  "scripts": [
    "scripts/foo.sh"
  ],
  "type": "shell"
}
where the content of the script is:
whoami
sudo su
whoami
and the output strangely remains:
==> virtualbox-ovf: Provisioning with shell script: scripts/configureProxies.sh
virtualbox-ovf: vagrant
virtualbox-ovf: vagrant
why cant I switch to the root user?
How can I execute statements as root?
Note, I do not want to quote all statements like sudo "statement |foo" but rather globally switch user like demonstrated with sudo su
| You should override the execute_command so that the whole script runs through sudo. In the template below, {{.Path}} is the uploaded script, {{.Vars}} holds the environment variables, and echo 'vagrant' | sudo -S feeds the vagrant user's password to sudo on stdin. Example:
"provisioners": [
{
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'",
"scripts": [
"scripts/foo.sh"
],
"type": "shell"
}
],
| Packer | 48,537,171 | 14 |
If I want to create a virtual machine image using Packer, one option is to download an operating system's ISO image and use that as the base for a custom setup. When doing this, one needs to provide the boot_command, which is an array of strings that tell Packer how to setup the operating system.
Now my question is: How do I find out the correct boot_command steps for a given operating system? Of course I might boot it up manually and write down every single thing I type, but I wonder if there is a more convenient way.
Of course I can also ask Google about it, but is there an "official" way? E.g., are the steps for Ubuntu documented somewhere in the Ubuntu documentation? Or is it actually trial and error, or at least peeking at somebody else's work?
| The boot_command depends on OS you want to install and are just the keystrokes that are needed to start an automatted installation.
For Ubuntu/Debian it is called preseeding, for Red Hat/CentOS/SLES there are kickstart files, and other Linux distributions probably have similar features.
For Ubuntu a starting point is the documentation of the Automatic Installation.
Packer normally uses the boot_command in conjunction with the http_directory directory. Ubuntu is booted from ISO, then Packer types in the keystrokes of boot_command and then serves a static HTTP download link with the preseed configuration to do the rest of the installation, eg. installing packages.
The boot_command contains kernel parameters, but it can also be used to pass boot parameters that preseed installer questions.
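As an illustrative fragment of a virtualbox-iso builder for a preseeded Ubuntu/Debian install (ISO settings omitted; the preseed file name and hostname are placeholders):
{
  "type": "virtualbox-iso",
  "http_directory": "http",
  "boot_command": [
    "<esc><wait>",
    "install <wait>",
    "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg <wait>",
    "debian-installer=en_US locale=en_US keymap=us <wait>",
    "hostname=packer-build <wait>",
    "<enter>"
  ]
}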
| Packer | 31,370,750 | 12 |
From ubuntu shell I ran below command, to talk to aws platform, to customise amazon ami(ami-9abea4fb):
$ packer build -debug template.packer
Debug mode enabled. Builds will not be parallelized.
amazon-ebs output will be in this color.
==> amazon-ebs: Prevalidating AMI Name...
==> amazon-ebs: Pausing after run of step 'StepPreValidate'. Press enter to continue.
==> amazon-ebs: Inspecting the source AMI...
==> amazon-ebs: Pausing after run of step 'StepSourceAMIInfo'. Press enter to continue.
==> amazon-ebs: Creating temporary keypair: packer 5dfe9f3b-9cc2-cbfa-7349-5c8ef50c64d5
amazon-ebs: Saving key for debug purposes: ec2_amazon-ebs.pem
==> amazon-ebs: Pausing after run of step 'StepKeyPair'. Press enter to continue.
where template.packer is:
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-9abea4fb",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "MiddleTier-{{isotime | clean_ami_name}}",
      "ami_description": "Amazon AMI customised",
      "tags": {
        "role": "MiddleTier"
      },
      "run_tags": {
        "role": "buildSystem"
      }
    }
  ],
  "provisioners": [
  ],
  "post-processors": [
  ]
}
and my understanding is, AWS has created a private key(ec2_amazon-ebs.pem) for packer to talk to EC2 instance in passwordless way, as mentioned in above steps.
But I do not see packer copying the private key(ec2_amazon-ebs.pem) in my laptop(as ~/.ssh/ec2_amazon-ebs.pem)
How does packer talk to EC2? without copying as ~/.ssh/ec2_amazon-ebs.pem in my laptop
| Unless Packer is given a private SSH key via the ssh_private_key_file option, Packer creates an ephemeral key pair that is only kept in memory while Packer is running.
When you run with the -debug flag this ephemeral key is saved into the current working directory. This is to enable you to troubleshoot the build by manually SSH'ing into the instance.
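For example (the instance address is a placeholder you would take from the Packer output), while a -debug build is paused you could inspect the build instance with:
ssh -i ec2_amazon-ebs.pem ubuntu@<instance-public-ip>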
| Packer | 59,440,394 | 11 |
I am extremely confused on why this occurs when using Packer. What I want to do is move a local redis.conf file over to my AMI; however, it gives me an error as I do this. My Packer provisioner is like so:
{
"type": "file",
"source": "../../path/to/file/redis.conf",
"destination": "/etc/redis/redis.conf"
}
And then it returns an error saying: scp /etc/redis/: Permission denied
| What you need to do is copy the file to a location where you have write access (/tmp for example) and then use an inline provisioner to move it somewhere else.
I have something like:
provisioner "file" {
source = "some/path/to/file.conf"
destination = "/tmp/file.conf
}
And then:
provisioner "shell" {
inline = ["sudo mv /tmp/file.conf /etc/some_service/file.conf"]
}
| Packer | 67,095,815 | 11 |
I would like to generate a public/private ssh key pair during packer provisioning and copy the files to the host machine. Is there a way to copy files out from the VM to the host using packer?
| I figured it out. The file provisioner has a "direction" option that allows download instead of upload
{
"type": "file",
"source": "app.tar.gz",
"destination": "/tmp/app.tar.gz",
"direction" : "download"
}
| Packer | 36,511,571 | 10 |
I have used hashicorp packer for building baked VM images.
But I was wondering whether LinuxKit does the same thing, i.e. building baked VM images, with the only difference being that it is more container- and kernel-centric.
I want to know the exact difference between how these two work, and their use cases.
Also, can there be a use case that combines both Packer and LinuxKit?
| I have used both fairly extensively (disclosure: I am a volunteer maintainer for LinuxKit). I used packer for quite some time, and switched almost all of the work I did in packer over to LinuxKit (lkt).
In principle both are open-source tools that serve the same purpose: generate an OS image that can be run. Practically, most use it for VM images to run on vbox, AWS, Azure, GCR, etc., but you can generate an image that will run on bare metal, which I have done as well.
Packer, being older, has a more extensive array of provisioners, builders, plugins, etc. It tries to be fairly broad-based and non-opinionated. Build for everywhere, run any install you want.
LinuxKit runs almost everything - onboot processes and continuous services - in a container. Even the init phase - where the OS image will be booted - is configured by copying files from OCI images.
LinuxKit's strong opinions about how to run and build things can in some ways be restrictive, but also liberating.
The most important differences, in my opinion, are the following:
lkt builds up from scratch to the bare minimum you need; Packer builds from an existing OS base.
lkt's security surface of attack will be smaller, because it starts not with an existing OS, but with, well, nothing.
lkt images can be significantly smaller, because you add in only precisely what you need.
lkt builds run locally. Packer essentially spins up a VM (vbox, EC2, whatever), runs some base image, modifies it per your instructions, and then saves it as a new image. lkt just manipulates OCI images by downloading and copying files to create a new image.
I can get to the same net result for differences 1-3 with Packer and LinuxKit, albeit lkt is much less work. E.g. I contributed the getty package to LinuxKit to separate and control when/how getty is launched, and in which namespace. The amount of work to separate and control that in a packer image built on a full OS would have been much harder. Same for the tpm package. Etc.
The biggest difference IMO, though, is step 4. Because Packer launches a VM and runs commands in it, it is much slower and much harder to debug. The same packer image that takes me 10+ mins to build can be 30 seconds in lkt. Your mileage may vary, depending on if the OCI images are downloaded or not, and how complex what you are doing is, but it really has been an order of magnitude faster for me.
Similarly, debugging step by step, or finding an error, running, debugging, and rebuilding, is far harder in a process that runs in a remote VM than it is in a local command: lkt build.
As I said, opinions are my own, but those are the reasons that I moved almost all of my build work to lkt, contributed, and agreed to join the excellent group of maintainers when asked by the team.
At the same time, I am deeply appreciative to HashiCorp for their fantastic toolset. Packer served me well; nowadays, LinuxKit serves me better.
| Packer | 47,812,633 | 10 |
No matter what I do, I only ever get a 404 or Error: invalid reference format
I think it should be podman pull hub.docker.com/_/postgres
but this doesn't work. I've also tried
podman pull hub.docker.com/postgres
podman pull hub.docker.com/__/postgres
podman pull hub.docker.com/library/postgres
Any ideas what's needed here to grab any of the official images from Docker Hub?
| In order to pull images from Docker Hub using podman, the image name needs to be prefixed by the docker.io/ registry name.
To get the 'official images' they are part of the 'library' collection.
So to pull Postgres from Docker Hub using Podman, the command is
podman pull docker.io/library/postgres
| Podman | 69,162,077 | 31 |
I want to run podman as a container to run CI/CD pipelines. However, I keep getting this error from the podman container:
$ podman info
ERRO[0000] 'overlay' is not supported over overlayfs
Error: could not get runtime: 'overlay' is not supported over overlayfs: backing file system is unsupported for this graph driver
I am using the Jenkins Kubernetes plugin to write CI/CD pipelines that run as containers within a Kubernetes cluster. I've been successful at writing pipelines that use a Docker-in-Docker container to run docker build and docker push commands.
However, running a Docker client and a Docker Daemon inside a container makes the CI/CD environment very bloated, hard to configure, and just not ideal to work with. So I figured I could use podman to build Docker images from Dockerfiles without using a fat Docker daemon.
The problem is that podman is so new that I have not seen anyone attempt this before, nor I am enough of a podman expert to properly execute this.
So, using the podman installation instructions for Ubuntu I created the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -qq \
&& apt-get install -qq -y software-properties-common uidmap \
&& add-apt-repository -y ppa:projectatomic/ppa \
&& apt-get update -qq \
&& apt-get -qq -y install podman
# To keep it running
CMD tail -f /dev/null
So I built the image and ran it as follows:
# Build
docker build -t podman:ubuntu-16.04 .
# Run
docker run --name podman -d podman:ubuntu-16.04
Then when running this command on the running container, I get an error:
$ docker exec -ti podman bash -c "podman info"
ERRO[0000] 'overlay' is not supported over overlayfs
Error: could not get runtime: 'overlay' is not supported over overlayfs: backing file system is unsupported for this graph driver
I installed podman on an Ubuntu 16.04 machine I had and ran the same podman info command, and I got the expected results:
host:
  BuildahVersion: 1.8-dev
  Conmon:
    package: 'conmon: /usr/libexec/crio/conmon'
    path: /usr/libexec/crio/conmon
    version: 'conmon version , commit: '
  Distribution:
    distribution: ubuntu
    version: "16.04"
  MemFree: 2275770368
  MemTotal: 4142137344
  OCIRuntime:
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 2146758656
  SwapTotal: 2146758656
  arch: amd64
  cpus: 2
  hostname: jumpbox-4b3620b3
  kernel: 4.4.0-141-generic
  os: linux
  rootless: false
  uptime: 222h 46m 33.48s (Approximately 9.25 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 15
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
Does anyone know how I can fix this error and get podman working from a container?
| Your Dockerfile should install iptables as well:
FROM ubuntu:16.04
RUN apt-get update -qq \
&& apt-get install -qq -y software-properties-common uidmap \
&& add-apt-repository -y ppa:projectatomic/ppa \
&& apt-get update -qq \
&& apt-get -qq -y install podman \
&& apt-get install -y iptables
# To keep it running
CMD tail -f /dev/null
Then run the command with:
docker run -ti --rm podman:test bash -c "podman --storage-driver=vfs info"
This should give you the response you expect.
| Podman | 56,032,747 | 30 |
I recently found out about Podman (https://podman.io). Having a way to use Linux fork processes instead of a Daemon and not having to run using root just got my attention.
But I'm very used to orchestrate the containers running on my machine (in production we use kubernetes) using docker-compose. And I truly like it.
So I'm trying to replace docker-compose. I will try to keep docker-compose and using podman as an alias to docker as Podman uses the same syntax as docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
| Yes, that is doable now: check podman-compose, which is one way of doing it. Another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post about this from Jérôme Petazzoni (@jpetazzo): from docker-compose to kubernetes deployment
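For example (illustrative commands; installation methods vary by distribution):
# Option 1: podman-compose reads the existing docker-compose.yml directly
pip3 install --user podman-compose
podman-compose -f docker-compose.yml up -d

# Option 2: convert the compose file to Kubernetes manifests with Kompose
kompose convert -f docker-compose.yml -o k8s/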
| Podman | 55,154,393 | 27 |
What I am trying to accomplish is to connect to a database installed on the host system. Now there is a similar question already for docker, but I could not get that to work with Podman, I imagine because networking works a bit differently here.
My solution so far has been to use --add-host=dbhost:$(ip route show dev cni-podman0 | cut -d\ -f7), but I am not certain that's a good idea and it's not going to work when a different network is used.
What is the best approach to accomplish this? Is there perhaps a default hostname for the container host already defined?
| You can also use host.containers.internal in podman. It's basically Podman's equivalent to host.docker.internal, but works out of the box.
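For example (the image, credentials, and database name here are assumptions for illustration), a client container could reach a PostgreSQL server running on the host like this:
podman run --rm -e PGPASSWORD=secret docker.io/library/postgres:13 \
  psql -h host.containers.internal -U myuser -d mydb -c 'SELECT 1'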
| Podman | 58,678,983 | 24 |
When I do something like podman rmi d61259d8f7a7 -f it fails with a message: Error: unable to delete "vvvvvvvvvvvv" (cannot be forced) - image has dependent child images.
I already tried the all switch podman rmi --all which does delete some images but many are still left behind. How do I force remove all images and dependent child images in a single step?
| Inspired by a similar docker answer with a slight modification, adding the a was the missing magic in my case:
WARNING: This will delete every image! Please, check and double check that it is indeed what you need.
$ podman rmi $(podman images -qa) -f
Again, please use with caution and make sure you know what you're doing! Sharing here for my own future reference and hoping that it will save someone else some time.
Thanks to the hint by @Giuseppe Scrivano there's an alternative which may be more natural (previous warning applies):
podman system prune --all --force && podman rmi --all
See podman system prune --help for details. I have not yet had a chance to verify that this second method fixes the "image has dependent child images" error.
| Podman | 63,287,522 | 24 |
I'm trying to use Podman for local development. My idea is to use a local folder and sync it with the container where I'll be running my application.
I found that the -v option that I would use if I was working with Docker works with the server machine, as it says in the documentation -v Bind mount a volume into the container. Volume src will be on the server machine, not the client. Because of that, when I use that option the folder is not mounted and I can't find it when I access it using podman exec -it application bash
Is there a way I can sort this out?
I want to something equivalent to:
docker run -v localFolder:/remoteFolder application
where localFolder is a path in my local machine, that will be mapped on remoteFolder on the container
| podman machine stop podman-machine-default
podman machine rm podman-machine-default
podman machine init -v $HOME:$HOME
podman machine start
podman run -ti --rm -v $HOME:$HOME busybox
| Podman | 69,298,356 | 24 |
The Google Container Registry documentation provides very good help on authenticating to it with Docker. Is there a way to do the same with Podman? The Google doc mentions Access Token as a method. Maybe that could work. If anybody has any advice or experience of this, I'd really appreciate your help
| gcloud auth print-access-token | podman login -u oauth2accesstoken --password-stdin XX.gcr.io
xx.gcr.io is the host name. For example https://us.gcr.io, etc.
oauth2accesstoken is a special username that tells it to get all identity information from the token passed as a password.
See this doc.
| Podman | 63,790,529 | 16 |
I am trying to build a large image with Podman. The build fails with
Reached heap limit Allocation failed
error.
In docker I can avoid this by allocating more memory for docker engine in docker settings.
However, for Podman it doesn't seem to be so easy.
I tried to modify ~/.config/containers/podman/machine/qemu/podman-machine-default.json and increase the memory, then run
podman machine stop
podman machine start
with no luck to solve the problem.
Any ideas?
| The following commands are also very handy for quick resolution.
podman machine stop
podman machine set --cpus 2 --memory 2048
podman machine start
Source: https://github.com/containers/podman/issues/12713#issuecomment-1002567777
| Podman | 70,114,200 | 16 |
Is it possible to use Testcontainers with Podman in Java tests?
As of March 2022, the Testcontainers library doesn't detect an installed Podman as a valid Docker environment.
Can Podman be a Docker replacement on both MacOS with Apple silicon (local development environment) and Linux x86_64 (CI/CD environment)?
| It is possible to use Podman with Testcontainers in Java projects, that use Gradle on Linux and MacOS (both x86_64 and Apple silicon).
Prerequisites
Podman Machine and Remote Client are installed on MacOS - https://podman.io/getting-started/installation#macos
Podman is installed on Linux - https://podman.io/getting-started/installation#linux-distributions
Enable the Podman service
Testcontainers library communicates with Podman using socket file.
Linux
Start Podman service for a regular user (rootless) and make it listen to a socket:
systemctl --user enable --now podman.socket
Check the Podman service status:
systemctl --user status podman.socket
Check the socket file exists:
ls -la /run/user/$UID/podman/podman.sock
MacOS
Podman socket file /run/user/1000/podman/podman.sock can be found inside the Podman-managed Linux VM. A local socket on MacOS can be forwarded to a remote socket on Podman-managed VM using SSH tunneling.
The port of the Podman-managed VM can be found with the command podman system connection list --format=json.
Install jq to parse JSON:
brew install jq
Create a shell alias to forward the local socket /tmp/podman.sock to the remote socket /run/user/1000/podman/podman.sock:
echo "alias podman-sock=\"rm -f /tmp/podman.sock && ssh -i ~/.ssh/podman-machine-default -p \$(podman system connection list --format=json | jq '.[0].URI' | sed -E 's|.+://.+@.+:([[:digit:]]+)/.+|\1|') -L'/tmp/podman.sock:/run/user/1000/podman/podman.sock' -N core@localhost\"" >> ~/.zprofile
source ~/.zprofile
Open an SSH tunnel:
podman-sock
Make sure the SSH tunnel is open before executing tests using Testcontainers.
Configure Gradle build script
build.gradle
test {
    OperatingSystem os = DefaultNativePlatform.currentOperatingSystem;
    if (os.isLinux()) {
        def uid = ["id", "-u"].execute().text.trim()
        environment "DOCKER_HOST", "unix:///run/user/$uid/podman/podman.sock"
    } else if (os.isMacOsX()) {
        environment "DOCKER_HOST", "unix:///tmp/podman.sock"
    }
    environment "TESTCONTAINERS_RYUK_DISABLED", "true"
}
Set DOCKER_HOST environment variable to Podman socket file depending on the operating system.
Disable Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED.
Moby Ryuk helps you to remove containers/networks/volumes/images by given filter after specified delay.
Ryuk is a technology for Docker and doesn't support Podman. See testcontainers/moby-ryuk#23
Testcontainers library uses Ruyk to remove containers. Instead of relying on Ryuk to implicitly remove containers, we will explicitly remove containers with a JVM shutdown hook:
Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
Pass the environment variables
As an alternative to configuring Testcontainers in a Gradle build script, you can pass the environment variables to Gradle.
Linux
DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
MacOS
DOCKER_HOST="unix:///tmp/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
Full example
See the full example https://github.com/evgeniy-khist/podman-testcontainers
| Podman | 71,549,856 | 15 |
Is there a way to run Podman inside Podman, similar to the way you can run Docker inside Docker?
Here is a snippet of my Dockerfile which is strongly based on another question:
FROM debian:10.6
RUN apt update && apt upgrade -qqy && \
apt install -qqy iptables bridge-utils \
qemu-kvm libvirt-daemon libvirt-clients virtinst libvirt-daemon-system \
cpu-checker kmod && \
apt -qqy install curl sudo gnupg2 && \
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list && \
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/Release.key | sudo apt-key add - && \
apt update && \
apt -qqy install podman
Now trying some tests:
$ podman run -it my/test bash -c "podman --storage-driver=vfs info"
... (long output; this works fine)
$ podman run -it my/test bash -c "podman --storage-driver=vfs images"
ERRO[0000] unable to write system event: "write unixgram @000ec->/run/systemd/journal/socket: sendmsg: no such file or directory"
REPOSITORY TAG IMAGE ID CREATED SIZE
$ podman run -it my/test bash -c "podman --storage-driver=vfs run docker.io/library/hello-world"
ERRO[0000] unable to write system event: "write unixgram @000ef->/run/systemd/journal/socket: sendmsg: no such file or directory"
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 0e03bdcc26d7 done
Copying config bf756fb1ae done
Writing manifest to image destination
Storing signatures
ERRO[0003] unable to write pod event: "write unixgram @000ef->/run/systemd/journal/socket: sendmsg: no such file or directory"
ERRO[0003] Error preparing container 66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94: error creating network namespace for container 66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94: mount --make-rshared /var/run/netns failed: "operation not permitted"
Error: failed to mount shm tmpfs "/var/lib/containers/storage/vfs-containers/66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94/userdata/shm": operation not permitted
I've also tried a suggestion from the other question, passing --cgroup-manager=cgroupfs, but without success:
$ podman run -it my/test bash -c "podman --storage-driver=vfs --cgroup-manager=cgroupfs run docker.io/library/hello-world"
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 0e03bdcc26d7 done
Copying config bf756fb1ae done
Writing manifest to image destination
Storing signatures
ERRO[0003] unable to write pod event: "write unixgram @000f3->/run/systemd/journal/socket: sendmsg: no such file or directory"
ERRO[0003] Error preparing container c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3: error creating network namespace for container c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3: mount --make-rshared /var/run/netns failed: "operation not permitted"
Error: failed to mount shm tmpfs "/var/lib/containers/storage/vfs-containers/c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3/userdata/shm": operation not permitted
Seems like some network configuration is needed. I found the project below which suggests that some tweaking on network configurations might be necessary, but I don't know what would be the context of that and whether it would apply here or not.
https://github.com/joshkunz/qemu-docker
EDIT: I've just discovered /var/run/podman.sock, but also without success:
$ sudo podman run -it -v /run/podman/podman.sock:/run/podman/podman.sock my/test bash -c "podman --storage-driver=vfs --cgroup-manager=cgroupfs run docker.io/library/hello-world"
Trying to pull my/test...
denied: requested access to the resource is denied
Trying to pull my:test...
unauthorized: access to the requested resource is not authorized
Error: unable to pull my/text: 2 errors occurred:
* Error initializing source docker://my/test: Error reading manifest latest in docker.io/my/test: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
* Error initializing source docker://quay.io/my/test:latest: Error reading manifest latest in quay.io/my/test: unauthorized: access to the requested resource is not authorized
Seems like root cannot see the images I've created under my user.
Any ideas? Thanks.
| Assume we would like to run ls / in a docker.io/library/alpine container.
Standard Podman
podman run --rm docker.io/library/alpine ls /
Podman in Podman
Let's run ls / in a docker.io/library/alpine container, but this time we run podman in a quay.io/podman/stable container.
Update June 2021
A GitHub issue comment shows an example of how to run Podman in Podman as a non-root user both on the host and in the outer container. Slightly modified it would look like this:
podman \
run \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
Here is a full example:
$ podman --version
podman version 3.2.1
$ cat /etc/fedora-release
Fedora release 34 (Thirty Four)
$ uname -r
5.12.11-300.fc34.x86_64
$ podman \
run \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying config sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83
Writing manifest to image destination
Storing signatures
bin
dev
etc
home
lib
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
$
To avoid repeatedly downloading the inner container image,
create a volume
podman volume create mystorage
and add the command-line option
-v mystorage:/home/podman/.local/share/containers:rw
to the outer Podman command. In other words
podman \
run \
-v mystorage:/home/podman/.local/share/containers:rw \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
Podman in Podman (outdated answer)
(The old outdated answer from Dec 2020. I'll probably remove this when it's clear that the method described here is outdated)
Let's run ls / in a docker.io/library/alpine container, but this time we run podman in a quay.io/podman/stable container.
The command will look like this:
podman \
run \
--privileged \
--rm \
--ulimit host \
-v /dev/fuse:/dev/fuse:rw \
-v ./mycontainers:/var/lib/containers:rw \
quay.io/podman/stable \
podman \
run \
--rm \
--user 0 \
docker.io/library/alpine ls
(The directory ./mycontainers is here used for container storage)
Here is a full example
$ podman --version
podman version 2.1.1
$ mkdir mycontainers
$ podman run --privileged --rm --ulimit host -v /dev/fuse:/dev/fuse:rw -v ./mycontainers:/var/lib/containers:rw quay.io/podman/stable podman run --rm --user 0 docker.io/library/alpine ls | head -5
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob sha256:188c0c94c7c576fff0792aca7ec73d67a2f7f4cb3a6e53a84559337260b36964
Copying config sha256:d6e46aa2470df1d32034c6707c8041158b652f38d2a9ae3d7ad7e7532d22ebe0
Writing manifest to image destination
Storing signatures
bin
dev
etc
home
lib
$ podman run --privileged --rm --ulimit host -v /dev/fuse:/dev/fuse:rw -v ./mycontainers:/var/lib/containers:rw quay.io/podman/stable podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/alpine latest d6e46aa2470d 4 days ago 5.85 MB
If you would leave out -v ./mycontainers:/var/lib/containers:rw you might see the slightly confusing error message
Error: executable file `ls` not found in $PATH: No such file or directory: OCI runtime command not found error
References:
How to use Podman inside of a container Red Hat blog post from July 2021.
discussion.fedoraproject.org (discussion about not found in $PATH)
github comment (that gives advice about the correct way to run Podman in Podman)
| Podman | 64,509,618 | 14 |
I'm asking this question despite having read similar but not exactly what I want at C# naming convention for enum and matching property
I found I have a tendency to name enums in plural and then 'use' them as singular, example:
public enum EntityTypes {
    Type1, Type2
}

public class SomeClass {
    /*
    some codes
    */
    public EntityTypes EntityType { get; set; }
}
Of course it works and this is my style, but can anyone find potential problem with such convention? I do have an "ugly" naming with the word "Status" though:
public enum OrderStatuses {
    Pending, Fulfilled, Error, Blah, Blah
}

public class SomeClass {
    /*
    some codes
    */
    public OrderStatuses OrderStatus { get; set; }
}
Additional Info:
Maybe my question wasn't clear enough. I often have to think hard when naming the variables of the my defined enum types. I know the best practice, but it doesn't help to ease my job of naming those variables.
I can't possibly expose all my enum properties (say "Status") as "MyStatus".
My question: Can anyone find potential problem with my convention described above? It is NOT about best practice.
Question rephrase:
Well, I guess I should ask the question this way: Can someone come out a good generic way of naming the enum type such that when used, the naming of the enum 'instance' will be pretty straightforward?
| Microsoft recommends using singular for Enums unless the Enum represents bit fields (use the FlagsAttribute as well). See Enumeration Type Naming Conventions (a subset of Microsoft's Naming Guidelines).
To respond to your clarification, I see nothing wrong with either of the following:
public enum OrderStatus { Pending, Fulfilled, Error };
public class SomeClass {
public OrderStatus OrderStatus { get; set; }
}
or
public enum OrderStatus { Pending, Fulfilled, Error };
public class SomeClass {
public OrderStatus Status { get; set; }
}
| Plural | 1,405,851 | 361 |
I want to be able to translate pluralized strings in i18n in rails. A string can be :
You have 2 kids
or
You have 1 kid
I know that I can use pluralize helper method, but I want to embed this in i18n translations so that I don't have to mess up with my views at any point in the future. I read that :count is somehow used in translations for plural, but I can't find any real resources on how it gets implemented.
Notice that I know that I can pass a variable in a translation string. I also tried something like :
<%= t 'misc.kids', :kids_num => pluralize(1, 'kid') %>
Which works fine, but has a fundamental problem of the same idea. I need to specify the string 'kid' in the pluralize helper. I don't want to do that because it will lead to view problems in the future. Instead I want to keep everything in the translation and nothing in the view.
How can I do that ?
| Try this:
en.yml :
en:
  misc:
    kids:
      zero: no kids
      one: 1 kid
      other: %{count} kids
In a view:
You have <%= t('misc.kids', :count => 4) %>
Updated answer for languages with multiple pluralization (tested with Rails 3.0.7):
File config/initializers/pluralization.rb:
require "i18n/backend/pluralization"
I18n::Backend::Simple.send(:include, I18n::Backend::Pluralization)
File config/locales/plurals.rb:
{:ru =>
  { :i18n =>
    { :plural =>
      { :keys => [:one, :few, :other],
        :rule => lambda { |n|
          if n == 1
            :one
          else
            if [2, 3, 4].include?(n % 10) &&
               ![12, 13, 14].include?(n % 100) &&
               ![22, 23, 24].include?(n % 100)
              :few
            else
              :other
            end
          end
        }
      }
    }
  }
}
#More rules in this file: https://github.com/svenfuchs/i18n/blob/master/test/test_data/locales/plurals.rb
#(copy the file into `config/locales`)
File config/locales/en.yml:
en:
  kids:
    zero: en_zero
    one: en_one
    other: en_other
File config/locales/ru.yml:
ru:
  kids:
    zero: ru_zero
    one: ru_one
    few: ru_few
    other: ru_other
Test:
$ rails c
>> I18n.translate :kids, :count => 1
=> "en_one"
>> I18n.translate :kids, :count => 3
=> "en_other"
>> I18n.locale = :ru
=> :ru
>> I18n.translate :kids, :count => 1
=> "ru_one"
>> I18n.translate :kids, :count => 3
=> "ru_few" #works! yay!
>> I18n.translate :kids, :count => 5
=> "ru_other" #works! yay!
| Plural | 6,166,064 | 96 |
I am trying to use the getQuantityString method in Resources to retrieve quantity strings (plurals) based on Android Developer guidelines Quantity string (plurals)
The error I am getting is
Error:(604) Multiple substitutions specified in non-positional format; did you mean to add the formatted="false" attribute?
Error:(604) Found tag where is expected
when I set up plurals as below
<plurals name="productCount">
<item quantity="one" formatted="true">%1$d of %2$d product</item>
<item quantity="other" formatted="true">%1$d of %2$d products</item>
</plurals>
And trying to read it as below
productIndexCountText.setText(getResources().getQuantityString(R.plurals.productCount, position, size));
One workaround is to break the string up to use plural only for the last part of the string and concatenate the two parts. But I am trying to avoid doing that if possible.
| You don't need to set the "formatted" attribute for any of those items. When using quantity strings, there are only three possibilities:
the resource string is plain text and does not contain any parameters
the resource string contains only one parameter (most likely the quantity); use %d or whatever format you need
the resource string contains multiple parameters; all parameters have to be explicitly accessed by their position, for example %1$d
As for the getQuantityString method, there are two overloads: one with only the resource id and the quantity, and one with an additional Object... formatArgs parameter.
For case 1., you can use the getQuantityString(@PluralsRes int id, int quantity) method.
For all other cases, i. e. if you have any parameters, you need the getQuantityString(@PluralsRes int id, int quantity, Object... formatArgs) overload. Note: all parameters have to be present in the parameter array. That means, if the resource string displays the quantity, the quantity variable will be passed twice to the function.
That is because the quantity parameter of the method itself is not considered when resolving the positional parameters of your resource string.
So if these are your resources,
<resources>
<plurals name="test0">
<item quantity="one">Test ok</item>
<item quantity="other">Tests ok</item>
</plurals>
<plurals name="test1">
<item quantity="one">%d test ok</item>
<item quantity="other">%d tests ok</item>
</plurals>
<plurals name="test2">
<item quantity="one">%2$s: %1$d test ok</item>
<item quantity="other">%2$s: %1$d tests ok</item>
</plurals>
<plurals name="test3">
<item quantity="one">%3$s: %1$d test out of %2$d ok</item>
<item quantity="other">%3$s: %1$d tests out of %2$d ok</item>
</plurals>
</resources>
then the appropriate calls to getQuantityString are:
int success = 1;
int total = 10;
String group = "Group name";
getResources().getQuantityString(R.plurals.test0, success)
// Test ok
getResources().getQuantityString(R.plurals.test1, success, success)
// 1 test ok
getResources().getQuantityString(R.plurals.test2, success, success, group)
// Group name: 1 test ok
getResources().getQuantityString(R.plurals.test3, success, success, total, group)
// Group name: 1 test out of 10 ok
success = 5;
getResources().getQuantityString(R.plurals.test0, success)
// Tests ok
getResources().getQuantityString(R.plurals.test1, success, success)
// 5 tests ok
getResources().getQuantityString(R.plurals.test2, success, success, group)
// Group name: 5 tests ok
getResources().getQuantityString(R.plurals.test3, success, success, total, group)
// Group name: 5 tests out of 10 ok
Quantity classes: understanding the quantity parameter
As stated above, the key is to understand that the quantity parameter of getQuantityString is not used to replace the placeholders like %d or %1$d. Instead, it is used to determine the appropriate item from the plurals itself, in combination with the locale of the resource file.
Beware however that this is a less direct mapping than the attribute's name and its possible values (zero, one, two, few, many, other) might suggest. For example, providing an additional <item quantity="zero"> will not work (at least not in English), even if the value of the quantity parameter is 0.
The reason is that the way plurals work in Android is by the concept of quantity classes. A quantity class is a set of quantity values that have the same grammatical rules in a given language. This crucially means that
which quantity classes are used, and
which numeric values are mapped to them
is dependent on the locale the respective resource file is for.
It is important to understand that both questions are decided only by grammatical necessity. Here are some examples:
In Chinese or Korean, only other is used, because in these languages sentences don't grammatically differ based on the given quantity.
In English, there's two classes: one for the literal value 1, and other for all other values including 0.
In Irish, 1 is mapped to one, 2 is mapped to two, 3-6 is few, 7-10 is many, 0 and 11+ is other.
In Slovenian, the value 1 and all values ending in 01 are mapped to one (1, 101, 3001, ...). 2 and values ending in 02 are mapped to two (2, 302, 1002, ...). 3, 4 and values ending in 03 or 04 are mapped to few (3, 4, 6004, ...). Anything else is other (0, 11, 48, 312, ...).
In Polish, 5-19 and values ending in 05-19 are mapped to many (5, 12, 216, 4711, ...). Values ending in 2, 3 or 4, including 2-4 themselves, are mapped to few (3, 42, 103, 12035374, ...). Note, however, that 12, 13 and 14 are exceptions to this rule because they are mapped to many. (Side note: yes, grammatically speaking, 5 is many while 12035374 is few.)
Armenian is like English, with the exception that the value 0 is also mapped to one, because that's how their grammar works. You can see from this example that the quantity class one doesn't even necessarily represent just one-ish numbers.
As you can see, it can get fairly complicated to determine the correct quantity class. That's why getQuantityString already does that for you, based on the quantity parameter and the resource file's locale. The rules Android (mostly) plays by are defined in the Language Plural Rules of the Unicode Common Locale Data Repository. That is also where the names of the quantity classes come from.
All that means that the set of quantity classes needed to translate any quantity string can differ from language to language (Chinese just needs other, English needs one and other, Irish needs all but zero, etc.). Within one language however, all plurals should each have the same number of items covering all quantity classes necessary for that particular language.
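If you want to poke at these CLDR quantity classes outside of Android, the same data is exposed by various libraries. As an illustration only (this is not something Android itself uses), a minimal sketch with Python's Babel package looks like this:

from babel import Locale

for lang in ("en", "pl", "ru", "ga"):
    loc = Locale(lang)
    # plural_form maps a number to its CLDR quantity class for that locale
    print(lang, [(n, loc.plural_form(n)) for n in (0, 1, 2, 5, 12, 22)])

Running it side by side for a few locales makes it easy to see why, for example, 22 lands in "few" for Polish and Russian but in "other" for English.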
Conclusion
A call to getQuantityString can be understood like this:
int success = 5;
int total = 10;
String group = "Group name";
getResources().getQuantityString(R.plurals.test3, success, success, total, group)
// \_____________/ \_____/ \___________________/
// | | |
// id: used to get the plurals resource | |
// quantity: used to determine the appropriate quantity class |
// formatArgs: used to positionally replace the placeholders %1, %2 and %3
The quantity parameter's value of "5" will mean the used item will be the one with the quantity class other from Chinese, Korean, English, Slovenian and Armenian resource files, few for Irish, and many for Polish.
There are two special cases I'd also briefly mention:
Non-integer quantities
Basically, the chosen class depends on language-specific rules again. It is neither universal how a class is chosen, nor guaranteed that any class required to cover all rules for integers is also used for any non-integers. Here are a few examples:
For English, any value with decimals will always map to other.
For Slovenian, any value with decimals will always map to few.
For Irish, the choice depends on the integer part.
For Polish, in contrast to the complex rules for integers, non-integers are always mapped to other like in English.
Note: This is how it should be according to the Language Plural Rules. Alas, Android has no readily available method for float or double at the moment.
Multiple quantities in one string
If your display text has multiple quantities, e. g. %d match(es) found in %d file(s)., split it into three separate resources:
%d match(es) (plurals item)
%d file(s) (plurals item)
%1$s found in %2$s. (ordinary parameterized strings item)
You can then make the appropriate calls to getQuantityString for 1 and 2, and then another one to getString for the third, with the first two readily localized strings as formatArgs.
The reason is to allow translators to switch the parameter order in the third resource, should the language require it. E.g., if the only valid syntax in a hypothetical language was In %d file(s) it found %d match(es)., the translator could translate the plurals as usual, and then translate the third resource as In %2$s it found %1$s. to account for the swapped order.
| Plural | 41,950,952 | 84 |
I use plurals to compile a quantity string for an Android application. I follow exactly what one can find in the tutorials:
res.getQuantityString(
R.plurals.number_of_comments, commentsCount, commentsCount);
Here is the definition of the plurals:
<?xml version="1.0" encoding="utf-8"?>
<resources>
<plurals name="number_of_comments">
<item quantity="zero">No comments</item>
<item quantity="one">One comment</item>
<item quantity="other">%d comments</item>
</plurals>
</resources>
Interestingly enough, the output string is different from what I defined:
commentsCount = 0 => "0 comments"
commentsCount = 1 => "One comment"
commentsCount = 2 => "2 comments"
I guess this is because the docs state "When the language requires special treatment of the number 0 (as in Arabic)." for the zero quantity. Is there any way to force my definition?
| According to the documentation :
The selection of which string to use is made solely based on
grammatical necessity. In English, a string for zero will be ignored
even if the quantity is 0, because 0 isn't grammatically different
from 2, or any other number except 1 ("zero books", "one book", "two
books", and so on).
If you still want to use a custom string for zero, you can load a different string when the quantity is zero :
if (commentsCount == 0)
str = res.getString(R.string.number_of_comments_zero);
else
str = res.getQuantityString(R.plurals.number_of_comments, commentsCount, commentsCount);
| Plural | 17,261,290 | 39 |
I was going to use Java's standard i18n system with the ChoiceFormat class for plurals, but then realized that it doesn't handle the complex plural rules of some languages (e.g. Polish). If it only handles languages that resemble English, then it seems a little pointless.
What options are there to achieve correct plural forms? What are the pros and cons of using them?
| Well, you already tagged the question correctly, so I assume you know a thing or two about ICU.
With ICU you have two choices for proper handling of plural forms:
PluralRules, which gives you the rules for given Locale
PluralFormat, which uses aforementioned rules to allow formatting
Which one to use? Personally, I prefer to use PluralRules directly, to select the appropriate message from the resource bundles.
ULocale uLocale = ULocale.forLanguageTag("pl-PL");
ResourceBundle resources = ResourceBundle.getBundle( "path.to.messages",
uLocale.toLocale());
PluralRules pluralRules = PluralRules.forLocale(uLocale);
double[] numbers = { 0, 1, 1.5, 2, 2.5, 3, 4, 5, 5.5, 11, 12, 23 };
for (double number : numbers) {
String resourceKey = "some.message.plural_form." + pluralRules.select(number);
String message = "!" + resourceKey + "!";
try {
message = resources.getString(resourceKey);
System.out.println(format(message, uLocale, number));
} catch (MissingResourceException e) { /* Log this */ }
}
Of course you (or the translator) would need to add the proper forms to properties file, in this example let's say:
some.message.plural_form.one=Znaleziono {0} plik
some.message.plural_form.few=Znaleziono {0} pliki
some.message.plural_form.many=Znaleziono {0} plików
some.message.plural_form.other=Znaleziono {0} pliku
For other languages (i.e. Arabic) you might also need to use "zero" and "two" keywords, see CLDR's language plural rules for details.
Alternatively you can use PluralFormat to select valid form. Usual examples show direct instantiation, which totally doesn't make sense in my opinion. It is easier to use it with ICU's MessageFormat:
String pattern = "Znaleziono {0,plural,one{# plik}" +
"few{# pliki}" +
"many{# plików}" +
"other{# pliku}}";
MessageFormat fmt = new MessageFormat(pattern, ULocale.forLanguageTag("pl-PL"));
StringBuffer result = new StringBuffer();
FieldPosition zero = new FieldPosition(0);
Object[] theNumber = { number };  // must be an Object[]; MessageFormat casts the argument to Object[]
fmt.format(theNumber, result, zero);
Of course, realistically you would not hardcode the pattern string, but place something like this in the properties file:
some.message.pattern=Found {0,plural,one{# file}other{# files}}
The only problem with this approach is, the translator must be aware of the placeholder format. Another issue, which I tried to show in the code above is, MessageFormat's static format() method (the one that is easy to use) always formats for the default Locale. This might be a real problem in web applications, where the default Locale typically means the server's one. Thus I had to format for a specific Locale (floating point numbers, mind you) and the code looks rather ugly...
I still prefer the PluralRules approach, which to me is much cleaner (although it needs to use the same message formatting style, only wrapped with helper method).
| Plural | 14,326,653 | 38 |
I'm looking for a function that, given a string, switches it between singular and plural. I need it to work for European languages other than English.
Are there any functions that can do the trick? (Given a string to convert and the language?)
Thanks
| Here is my handy function:
function plural( $amount, $singular = '', $plural = 's' ) {
if ( $amount === 1 ) {
return $singular;
}
return $plural;
}
By default, it just adds the 's' after the string. For example:
echo $posts . ' post' . plural( $posts );
This will echo '0 posts', '1 post', '2 posts', '3 posts', etc. But you can also do:
echo $replies . ' repl' . plural( $replies, 'y', 'ies' );
Which will echo '0 replies', '1 reply', '2 replies', '3 replies', etc. Or alternatively:
echo $replies . ' ' . plural( $replies, 'reply', 'replies' );
And it works for some other languages too. For example, in Spanish I do:
echo $comentarios . ' comentario' . plural( $comentarios );
Which will echo '0 comentarios', '1 comentario', '2 comentarios', '3 comentarios', etc. Or if adding an 's' is not the way, then:
echo $canciones . ' canci' . plural( $canciones, 'ón', 'ones' );
Which will echo '0 canciones', '1 canción', '2 canciones', '3 canciones', etc.
| Plural | 4,728,933 | 27 |
In Android strings, you can define plurals to handle translations depending on the actual number supplied to the string as described here.
Strings also allow for specifying multiple positional parameters similar to what sprintf does in many languages.
However, consider the following string:
<resources>
<string name="remaining">%1$d hours and %2$d minutes remaining.</string>
</resources>
It contains two numbers, how would I transform this to a plural in Android? All examples always work with a single parameter only. Is this even possible?
| The previous answer uses string concatenation, which is incorrect from an i18n point of view. For the original string "%1$d hours and %2$d minutes remaining.", using string concatenation would force the translation of "remaining" to the end, which might not be appropriate for some languages.
My solution would be:
<resources>
<string name="remaining">Time remaining: Hours:%1$d Minutes:%2$d.</string>
</resources>
Or perhaps with "Time remaining" as the heading.
This solution is mentioned in http://developer.android.com/guide/topics/resources/string-resource.html#FormattingAndStyling
It's often possible to avoid quantity strings by using
quantity-neutral formulations such as "Books: 1"
| Plural | 34,393,271 | 19 |
I have txt files that look like this:
word, 23
Words, 2
test, 1
tests, 4
And I want them to look like this:
word, 23
word, 2
test, 1
test, 4
I want to be able to take a txt file in Python and convert plural words to singular. Here's my code:
import nltk
f = raw_input("Please enter a filename: ")
def openfile(f):
with open(f,'r') as a:
a = a.read()
a = a.lower()
return a
def stem(a):
p = nltk.PorterStemmer()
[p.stem(word) for word in a]
return a
def returnfile(f, a):
with open(f,'w') as d:
d = d.write(a)
#d.close()
print openfile(f)
print stem(openfile(f))
print returnfile(f, stem(openfile(f)))
I have also tried these 2 definitions instead of the stem definition:
def singular(a):
for line in a:
line = line[0]
line = str(line)
stemmer = nltk.PorterStemmer()
line = stemmer.stem(line)
return line
def stem(a):
for word in a:
for suffix in ['s']:
if word.endswith(suffix):
return word[:-len(suffix)]
return word
Afterwards I'd like to take duplicate words (e.g. test and test) and merge them by adding up the numbers next to them. For example:
word, 25
test, 5
I'm not sure how to do that. A solution would be nice but not necessary.
| If you have complex words to singularize, I don't advise you to use stemming but a proper Python package like pattern:
from pattern.text.en import singularize
plurals = ['caresses', 'flies', 'dies', 'mules', 'geese', 'mice', 'bars', 'foos',
'families', 'dogs', 'child', 'wolves']
singles = [singularize(plural) for plural in plurals]
print(singles)
returns:
>>> ['caress', 'fly', 'dy', 'mule', 'goose', 'mouse', 'bar', 'foo', 'family', 'dog', 'child', 'wolf']
It's not perfect, but it's the best I found (96% accuracy according to the docs): http://www.clips.ua.ac.be/pages/pattern-en#pluralization
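As for the optional second part of the question (merging duplicates and summing their counts), here is a minimal sketch of one way to combine singularize with collections.Counter. The file name words.txt and the "word, 23" line format are assumptions based on the sample data in the question:

from collections import Counter
from pattern.text.en import singularize

counts = Counter()
with open("words.txt") as f:                      # assumed input: one "word, count" pair per line
    for line in f:
        word, _, number = line.strip().partition(",")
        if word and number.strip().isdigit():
            counts[singularize(word.strip().lower())] += int(number)

with open("words.txt", "w") as f:                 # overwrite with merged, singularized counts
    for word, total in counts.most_common():
        f.write("%s, %d\n" % (word, total))

Counting on the singular form means that 'test, 1' and 'tests, 4' collapse into a single 'test, 5' entry.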
| Plural | 31,387,905 | 14 |
Background
I work on an app that has many translations inside it.
I have the next English plural strings:
<plurals name="something">
<item quantity="one">added photo</item>
<item quantity="other">added %d photos</item>
</plurals>
and the French translation:
<plurals name="something">
<item quantity="one">a ajouté une photo</item>
<item quantity="other">a ajouté %d photos</item>
</plurals>
The problem
For both the French and Russian, I get the next warning:
The quantity 'one' matches more than one specific number in this
locale, but the message did not include a formatting argument (such as
%d). This is usually an internationalization error. See full issue
explanation for more.
When choosing to show details, it says:
Thing is, I don't get what should be done to fix it, or whether there is even a problem...
The question
What exactly should I do with those strings? What should I tell the translators?
| I'm going to write an answer since this is quite a difficult explanation.
In various languages, nouns use the singular form when the number ends in 1. Refer to: http://www.unicode.org/cldr/charts/latest/supplemental/language_plural_rules.html
To explain in English: there are languages where it's correct to say "added 1 photo" as well as "added 101 photo". Notice the "photo". So this means that you should always add "%d" to the "one" string as well. Android will choose the best scenario to use, which means that in English it will choose "other" for numbers > 1, and in other languages it will choose "one" for numbers ending in one.
To sum up, add %d to your "one" string and it should be fine. Also make sure your translators respect the plural rules for their language.
| Plural | 26,159,556 | 13 |
This is about best practices in general, not specific for a single language, database or whatever
We all have to deal with generated output where you can be reporting "one products" or "two product". Doesn't read very well... Some just solve this by using "one product(s)" or "number of products: (1)" and others might have other solutions.
Things could be even more complex in different spoken languages! In French, when you have zero products, you would use the singular form, not the plural form! (Zero product) Other languages (Chinese, Japanese) might even lack these grammatical differences or have more than two different words to indicate something about the number of products. (A plural and a greater plural, for example.)
But to keep this simple, let's focus on the languages that have both singular and plural words.
When setting up a new project, which also has to generate reports, how do you deal with singular and plural words? Do you add two name fields in your database for singular and plural form? Do you add additional rules in the code to transform words from singular to plural? Do you use other tricks?
When working on a project that needs to track singular and plural forms, how do you deal with this?
| I'd recommend taking a look at gettext in general and ngettext in particular. Maybe even if you're not going to translate your application. Just head to this part of the documentation. It has implementation for more or less all languages and even if your language of choice lacks this support, nothing stops you from borrowing the ideas.
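For example, Python ships gettext bindings in the standard library; here is a minimal sketch of ngettext (the "messages" domain, the ./locale directory and the "pl" language code are assumptions, and fallback=True simply echoes the untranslated strings when no compiled catalog is found):

import gettext

t = gettext.translation("messages", localedir="locale",
                        languages=["pl"], fallback=True)

for count in (1, 2, 5):
    # With a real catalog, ngettext picks the plural form from the catalog's
    # Plural-Forms header, so Polish gets its three forms while English keeps two.
    print(t.ngettext("%d product", "%d products", count) % count)

The point is that the plural rule lives in the translation catalog, not in your code, which is exactly what the question is asking for.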
| Plural | 1,438,093 | 12 |
Android allows translators to define Plurals. The following example works for me with locale 'en':
<plurals name="numberOfSongsAvailable">
<item quantity="one">One song found.</item>
<item quantity="other">%d songs found.</item>
</plurals>
But adding a special value for two does not work, still the other version is taken. Is the usage of two dependent upon the locale? So does Android only take the two version if the locale explicitly specifies that there should be a two version?
The SO Question Android plurals treatment of “zero” spots the same mistake when using zero in English which is also not supported. There are no solutions in this question except to avoid Android plurals which I want to avoid.
| Android is using the CLDR plurals system, and this is just not how it works (so don't expect this to change).
The system is described here:
http://cldr.unicode.org/index/cldr-spec/plural-rules
In short, it's important to understand that "one" does not mean the number 1. Instead these keywords are categories, and the specific numbers n that belong to each category are defined by rules in the CLDR database:
http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/language_plural_rules.html
While there appears to be no language which uses "zero" for anything other than 0, there are languages which assign 0 to "one". There are certainly plenty of cases where "two" contains other numbers than just 2.
If Android were to allow you to do what you intended, your applications could not be properly translated into any number of languages with more complex plural rules.
| Plural | 8,473,816 | 12 |
My question is similar to How to add regular string placeholders to a translated plurals .stringdict in swift ios but I am trying to understand if it is possible to pass 2 int parameters to strings dict.
Say if I want to translate something like:
1 apple : 3 pears
2 apples : 1 pear
Is it possible to do it in one localized format string like:
let apples = 1
let pears = 3
let applesAndPears = String.localizedStringWithFormat(<format>, apples, pears)
print(applesAndPears)
or do I have to combine them separately?
| One format is sufficient. You can use multiple placeholders in the NSStringLocalizedFormatKey entry, and for each placeholder a separate dictionary with the plural rule. Example:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>apples_and_pears</key>
<dict>
<key>NSStringLocalizedFormatKey</key>
<string>%#@num_apples@ : %#@num_pears@</string>
<key>num_apples</key>
<dict>
<key>NSStringFormatSpecTypeKey</key>
<string>NSStringPluralRuleType</string>
<key>NSStringFormatValueTypeKey</key>
<string>ld</string>
<key>zero</key>
<string>no apple</string>
<key>one</key>
<string>1 apple</string>
<key>other</key>
<string>%ld apples</string>
</dict>
<key>num_pears</key>
<dict>
<key>NSStringFormatSpecTypeKey</key>
<string>NSStringPluralRuleType</string>
<key>NSStringFormatValueTypeKey</key>
<string>ld</string>
<key>zero</key>
<string>no pear</string>
<key>one</key>
<string>1 pear</string>
<key>other</key>
<string>%ld pears</string>
</dict>
</dict>
</dict>
</plist>
Usage:
let apples = 1
let pears = 3
let format = NSLocalizedString("apples_and_pears", comment: "")
let applesAndPears = String.localizedStringWithFormat(format, apples, pears)
print(applesAndPears) // 1 apple : 3 pears
This can be combined with positional parameters: if the format is changed to
<key>NSStringLocalizedFormatKey</key>
<string>%2$#@num_pears@ : %1$#@num_apples@</string>
then the output becomes “3 pears : 1 apple”.
| Plural | 55,752,969 | 12 |
I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc.
Link to github repo: microservices-demo
While following the installation process, and on running command skaffold run to build and deploy my application, I get some errors:
Step 10/11 : RUN apt-get -qq update && apt-get install -y --no-install-recommends curl
---> Running in 43d61232617c
W: GPG error: http://deb.debian.org/debian buster InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster InRelease' is not signed.
W: GPG error: http://deb.debian.org/debian buster-updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed.
W: GPG error: http://security.debian.org/debian-security buster/updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://security.debian.org/debian-security buster/updates InRelease' is not signed.
failed to build: couldn't build "loadgenerator": unable to stream build output: The command '/bin/sh -c apt-get -qq update && apt-get install -y --no-install-recommends curl' returned a non-zero code: 100
I receive these errors when trying to build loadgenerator.
How can I resolve this issue?
| There are a few reasons why you encounter these errors:
There might be an issue with the existing cache and/or disc space. In order to fix it you need to clear the APT cache by executing: sudo apt-get clean and sudo apt-get update.
The same goes with existing docker images. Execute: docker image prune -f and docker container prune -f in order to remove unused data and free disc space. Executing docker image prune -f will delete all the unused images. To delete some selective images of large size, run docker images and identify the images you want to remove, and then run docker rmi -f <IMAGE-ID1> <IMAGE-ID2> <IMAGE-ID3>.
If you don't care about the security risks, you can try to run the apt-get command with the --allow-unauthenticated or --allow-insecure-repositories flag. According to the docs:
Ignore if packages can't be authenticated and don't prompt about it.
This can be useful while working with local repositories, but is a
huge security risk if data authenticity isn't ensured in another way
by the user itself.
Please let me know if that helped.
| Skaffold | 62,473,932 | 221 |
I work with team members to develop a microservices architecture, but I have a problem with the way we work. Indeed, I have too many microservices, and when I run them during development they consume too much memory, even with a good workstation. So I use docker compose to build and execute my MSA, but it takes a long time. One often hears about how to technically build an MSA, but never about how to work efficiently while building one. How do you do it in this case? How do you work? Do you use tools or anything else to improve and facilitate your development? I've heard about skaffold but I don't see what the difference is with docker compose or with a simple ci/cd in a cluster env for example. Feel free to give tips and your opinion. Thanks
| I've had a fair amount of experience with microservices and local development, and here are some approaches I've seen:
Run all the things locally on docker or k8s. If using k8s, then a tool like Skaffold can make it easier to run and debug a service locally in the IDE but put it into your local k8s so that it can communicate with other k8s services. It works OK, but running more than 4 or 5 full services locally in k8s or docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services. Use those locally and for integration tests. The mock services are intentionally much simpler, and therefore it's easier to run lots of them locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs that are caused by mock services not behaving like the real service. Record/replay tools like Hoverfly can help in building mock services.
Give every developer their own Cloud environment. Run most services in the cloud but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine but can be spendy to maintain separate cloud sandboxes for each developer. You also need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
| Skaffold | 64,251,489 | 43 |
I'm using Redis-server for Windows (2.8.4 - MSOpenTech) / Windows 8 64bit.
It is working great, but even after I run:
I see this: (and here are my questions)
When Redis-server.exe is up, I see 3 large files:
When Redis-server.exe is down, I see 2 large files:
Question:
— Didn't I just tell it to erase the whole DB? So why are those 2/3 huge files still there?
How can I completely erase those files? (without re-generating them)
NB
It seems that it is deleting keys without freeing the occupied space. If so, how can I free this unused space?
| From https://github.com/MSOpenTech/redis/issues/83
"Redis uses the fork() UNIX system API to create a point-in-time snapshot of the data store for storage to disk. This impacts several features on Redis: AOF/RDB backup, master-slave synchronization, and clustering. Windows does not have a fork-like API available, so we have had to simulate this behavior by placing the Redis heap in a memory mapped file that can be shared with a child(quasi-forked) process. By default we set the size of this file to be equal to the size of physical memory. In order to control the size of this file we have added a maxheap flag. See the Redis.Windows.conf file in msvs\setups\documentation (also included with the NuGet and Chocolatey distributions) for details on the usage of this flag. "
| Redis | 23,662,131 | 27 |
I am unable to run the resque-web on my server due to some issues I still have to work on but I still have to check and retry failed jobs in my resque queues.
Does anyone have any experience with how to peek at the failed jobs queue to see what the error was, and then how to retry the jobs using the redis-cli command line?
thanks,
| Found a solution on the following link:
http://ariejan.net/2010/08/23/resque-how-to-requeue-failed-jobs
In the rails console we can use these commands to check and retry failed jobs:
1 - Get the number of failed jobs:
Resque::Failure.count
2 - Check the errors exception class and backtrace
Resque::Failure.all(0,20).each { |job|
puts "#{job["exception"]} #{job["backtrace"]}"
}
The job object is a hash with information about the failed job. You may inspect it to check more information. Also note that this only lists the first 20 failed jobs. Not sure how to list them all so you will have to vary the values (0, 20) to get the whole list.
3 - Retry all failed jobs:
(Resque::Failure.count-1).downto(0).each { |i| Resque::Failure.requeue(i) }
4 - Reset the failed jobs count:
Resque::Failure.clear
retrying all the jobs does not reset the counter. We must clear it so it goes to zero.
| Redis | 8,798,357 | 27 |
I'd like to use Redis features such as bitfields and hashfields from an MVC controller. I understand there's built in caching support in ASP.NET core but this only supports basic GET and SET commands, not the commands that I need in my application. I know how to use StackExchange.Redis from a normal (eg. console) application, but I'm not sure how to set it up in an ASP site.
Where should I put all the connection initialization code so that I can have access to it afterwards from a controller? Is this something I would use dependency injection for?
| In your Startup class's ConfigureServices method, you'll want to add:
services.AddSingleton<IConnectionMultiplexer>(ConnectionMultiplexer.Connect("yourConnectionString"));
You can then use the dependency injection by changing your constructor signature to something like this:
public class YourController : Controller
{
private readonly IConnectionMultiplexer _connectionMultiplexer;
public YourController(IConnectionMultiplexer multiplexer)
{
this._connectionMultiplexer = multiplexer;
}
}
| Redis | 46,368,234 | 26 |
There is a very good SQL client solution for Linux users DBeaver. In spec, it is said that it supports MongoDB and Redis databases.However, there are no such drivers in "New connection" window. Does anyone know how to connect to Mongo or Redis?
| The Enterprise edition has MongoDB and Redis support.
EE download
We have split standalone version on Community and Enterprise editions.
Community edition includes the same extensions as DBeaver 2.x.
Enterprise edition = Community edition + NoSQL support (Cassandra and
MongoDB in 3.0). Both Community and Enterprise editions are free and
open source. New Cassandra and MongoDB extensions are not open source
(but free to use).
| Redis | 40,256,591 | 26 |
I have installed redis with laravel by adding "predis/predis":"~1.0",
Then for testing i added the following code :
public function showRedis($id = 1)
{
$user = Redis::get('user:profile:'.$id);
Xdd($user);
}
In app/config/database.php I have:
'redis' => [
'cluster' => false,
'default' => [
'host' => env('REDIS_HOST', 'localhost'),
'password' => env('REDIS_PASSWORD', null),
'port' => env('REDIS_PORT', 6379),
'database' => 0,
],
],
It throws the following error : No connection could be made because the target machine actively refused it. [tcp://127.0.0.1:6379]
I using virtualhost for the project.
Using Xampp with windows.
| I had this issue in Ubuntu 18.04
I installed redis in my local system, got solved.
sudo apt-get install redis-server
| Redis | 38,604,524 | 26 |
The Basic Usage documentation for StackExchange.Redis explains that the ConnectionMultiplexer is long-lived and is expected to be reused.
But what about when the connection to the server is broken? Does ConnectionMultiplexer automatically reconnect, or is it necessary to write code as in this answer (quoting that answer):
if (RedisConnection == null || !RedisConnection.IsConnected) {
RedisConnection = ConnectionMultiplexer.Connect(...);
}
RedisCacheDb = RedisConnection.GetDatabase();
Is the above code something good to handle recovery from disconnects, or would it actually result in multiple ConnectionMultiplexer instances? Along the same lines, how should the IsConnected property be interpreted?
[Aside: I believe the above code is a pretty bad form of lazy initialization, particularly in multithreaded environments - see Jon Skeet's article on Singletons].
| Here is the pattern recommended by the Azure Redis Cache team:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => {
return ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});
public static ConnectionMultiplexer Connection {
get {
return lazyConnection.Value;
}
}
A few important points:
It uses Lazy<T> to handle thread-safe initialization
It sets "abortConnect=false", which means if the initial connect attempt fails, the ConnectionMultiplexer will silently retry in the background rather than throw an exception.
It does not check the IsConnected property, since ConnectionMultiplexer will automatically retry in the background if the connection is dropped.
| Redis | 28,792,196 | 26 |
I would like to be notified when a volatile key expires in my redis store. The redis website provides some description of how this might be achieved in http://redis.io/topics/notifications, but I am wondering if it can be done using the python redis api.
After setting notify-keyspace-events Ex in my redis.conf file
and running this as a test:
import redis
import config
client = redis.StrictRedis.from_url(config.REDIS_URI)
client.set_response_callback('EXPIRE',callback)
client.set('a', 1)
client.expire('a',5)
callback() only gets called when client.expire('a',5) gets called, but not five seconds later as expected
| The surprise (no expiration events seen when time to live for a key reaches zero) is not bound to Python, but rather to the way, Redis is expiring keys.
Redis doc on Timing of expired events
Timing of expired events
Keys with a time to live associated are expired by Redis in two ways:
When the key is accessed by a command and is found to be expired.
Via a background system that looks for expired keys in background, incrementally, in order to be able to also collect keys that are never accessed.
The expired events are generated when a key is accessed and is found to be expired by one of the above systems, as a result there are no guarantees that the Redis server will be able to generate the expired event at the time the key time to live reaches the value of zero.
If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key time to live drops to zero, and the time the expired event is generated.
Basically expired events are generated when the Redis server deletes the key and not when the time to live theoretically reaches the value of zero.
Small test on console
when Redis running ($ sudo service redis-server start)
I started one console and have subscribed:
$ redis-cli
PSUBSCRIBE "__key*__:*"
Then, in another console:
$ redis-cli
> config set notify-keyspace-events AKE
what shall subscribe to all kinds of events
Then I continued with experiments in this second console:
> set aaa aaa
> del aaa
> set aaa aaa ex 5
> get aaa
All the activities were seen in subscribed console. Only the key expiration was sometime few seconds delayed, sometime came just in time.
Note alse, there are subtle differences in messages, one message __keyevent@0__:expire another __keyevent@0__:expired.
Sample listener spy.py
import redis
import time
r = redis.StrictRedis()
pubsub = r.pubsub()
pubsub.psubscribe("*")
for msg in pubsub.listen():
print time.time(), msg
This code registers to all existing channels in default redis and prints whatever gets published.
Run it:
$ python spy.py
and in another console try to set a key with an expiration. You will see all the events.
For following redis-cli input.
$ redis-cli
127.0.0.1:6379> set a aha
OK
127.0.0.1:6379> set b bebe ex 3
OK
127.0.0.1:6379> set b bebe ex 3
OK
we get spy output:
1401548400.27 {'pattern': None, 'type': 'psubscribe', 'channel': '*', 'data': 1L}
1401548428.36 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:a', 'data': 'set'}
1401548428.36 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:set', 'data': 'a'}
1401548436.8 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'set'}
1401548436.8 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:set', 'data': 'b'}
1401548436.8 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'expire'}
1401548436.8 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:expire', 'data': 'b'}
1401548439.82 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'expired'}
1401548439.82 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:expired', 'data': 'b'}
1401548484.46 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'set'}
1401548484.46 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:set', 'data': 'b'}
1401548484.46 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'expire'}
1401548484.46 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:expire', 'data': 'b'}
1401548487.51 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyspace@0__:b', 'data': 'expired'}
1401548487.51 {'pattern': '*', 'type': 'pmessage', 'channel': '__keyevent@0__:expired', 'data': 'b'}
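If you only care about expirations (as in the original question), a narrower variant of spy.py is to subscribe just to the expired key-event channel. This is a minimal sketch that assumes database 0 and that notify-keyspace-events already includes Ex:

import redis

r = redis.StrictRedis()
pubsub = r.pubsub()
pubsub.psubscribe("__keyevent@0__:expired")   # database 0; adjust the index for other databases

for msg in pubsub.listen():
    if msg["type"] == "pmessage":
        print("expired key:", msg["data"])    # msg["data"] holds the name of the expired key

Just keep in mind the timing caveat above: the message fires when Redis actually deletes the key, which can be later than the moment the TTL reaches zero.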
| Redis | 23,964,548 | 26 |
I have my config file at:
root/config/redis.rb
I start redis like this: redis-server
How do I start redis so that it uses my config file?
Also, I hate mucking about with ps | grep to try and find a pid to shut it down.
With the puma application server you can run commands like this:
pumactl -F config/puma.rb start
pumactl -F config/puma.rb stop
and pumactl infers the pid from the conf. Can this be the same with redis?
Also I've used this copy-pasted conf from the redis website:
# Redis configuration file example
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################ GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving at all commenting all the "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The biggest the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEES that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
aof-rewrite-incremental-fsync yes
And I've tweaked it to daemonize redis when it starts. Just wondering if there are any gotchas here? I've got a server that serves about 30 people simultaneously, and all I'm using redis for is sending asynchronous emails to people. Should this config do the trick?
| Okay, redis is pretty user friendly but there are some gotchas.
Here are just some easy commands for working with redis on Ubuntu:
install:
sudo apt-get install redis-server
start with conf:
sudo redis-server <path to conf>
sudo redis-server config/redis.conf
stop with conf:
redis-cli shutdown
(not sure how this shuts down the pid specified in the conf. Redis must save the path to the pid somewhere on boot)
log:
tail -f /var/log/redis/redis-server.log
Also, various example confs floating around online and on this site were beyond useless. The best, surefire way to get a compatible conf is to copy-paste the one your installation is already using. You should be able to find it here:
/etc/redis/redis.conf
Then paste it at <path to conf>, tweak as needed and you're good to go.
| Redis | 23,496,546 | 26 |
The use case is to use Redis to be local cache of MySQL
The data format in MySQL is: a single primary key and several other fields. There will not be queries cross table of db
Redis key is primary key in MySQL, and value is hash containing other fields in MySQL
When power off, less than one minute data lose is acceptable.
My solution is:
Redis writes AOF file, some process will monitor this file and sync the updated datas to MySQL
Hack Redis to write AOF in several files, just like MySQL binlog
The data interface will only read and write through Redis
Is this solution OK?
And what's the best strategy to do this job?
| You don't need to hack anything ;)
I am not entirely sure why you need the data on mysql. If I knew, maybe there would be a more suitable answer. In any case, as a generic answer you can use redis keyspace notifications
You could subscribe to the commands HSET, HMSET, HDEL and DEL on your keys, so you would get a notification every time a key is deleted or a hash value is set or removed.
Note if you miss any notification you would have an inconsistency. So once in a while you could just use the SCAN command to go through all your keys and check on mysql if they need to be updated.
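For illustration, here is a minimal redis-py sketch of that notification-based approach (assumptions: database 0, the redis-py client, permission to run CONFIG SET, and that HSET/HDEL/DEL are the only events you care about):
import redis
r = redis.Redis()
# Enable keyevent notifications for generic (g) and hash (h) commands.
r.config_set("notify-keyspace-events", "Egh")
p = r.pubsub()
p.psubscribe("__keyevent@0__:hset", "__keyevent@0__:hdel", "__keyevent@0__:del")
for message in p.listen():
    if message["type"] != "pmessage":
        continue
    event = message["channel"].decode().rsplit(":", 1)[-1]   # e.g. "hset"
    key = message["data"].decode()                           # the key that changed
    # Here you would read the hash with r.hgetall(key) and upsert/delete the MySQL row.
    print(event, key)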
Another strategy could be maintaining two separate structures. One would be the hash with the values, and the other would be a ZSET of all the values sorted by timestamp of update. The best way to keep both structures up to date would be to write two or three lua scripts (insert/update and delete) that would operate on the hash and the zset atomically.
Then you can just periodically query the ZSET for the elements with a timestamp higher than your last sync operation, get all the keys that were updated (it would include deleted keys, unless you want to keep a second ZSET exclusively for those) and then just retrieve all the elements by key and sync to mysql.
Hope it will work for you!
| Redis | 23,080,557 | 26 |
Requirement :
Python objects with 2-3 levels of nesting containing basic datypes like integers,strings, lists, and dicts.
( no dates etc), needs to be stored as json in redis against a key.
What are the best methods available for compressing json as a string for low memory footprint.
The target objects are not very large, having 1000 small elements on average,
or about 15000 characters when converted to JSON.
eg.
>>> my_dict
{'details': {'1': {'age': 13, 'name': 'dhruv'}, '2': {'age': 15, 'name': 'Matt'}}, 'members': ['1', '2']}
>>> json.dumps(my_dict)
'{"details": {"1": {"age": 13, "name": "dhruv"}, "2": {"age": 15, "name": "Matt"}}, "members": ["1", "2"]}'
### SOME BASIC COMPACTION ###
>>> json.dumps(my_dict, separators=(',',':'))
'{"details":{"1":{"age":13,"name":"dhruv"},"2":{"age":15,"name":"Matt"}},"members":["1","2"]}'
1/ Are there any other better ways to compress JSON to save memory in Redis (also ensuring lightweight decoding afterwards)?
2/ How good a candidate would be msgpack [http://msgpack.org/]?
3/ Shall I consider options like pickle as well?
| We just use gzip as a compressor.
import gzip
import cStringIO
def decompressStringToFile(value, outputFile):
"""
decompress the given string value (which must be valid compressed gzip
data) and write the result in the given open file.
"""
stream = cStringIO.StringIO(value)
decompressor = gzip.GzipFile(fileobj=stream, mode='r')
while True: # until EOF
chunk = decompressor.read(8192)
if not chunk:
decompressor.close()
outputFile.close()
return
outputFile.write(chunk)
def compressFileToString(inputFile):
"""
read the given open file, compress the data and return it as string.
"""
stream = cStringIO.StringIO()
compressor = gzip.GzipFile(fileobj=stream, mode='w')
while True: # until EOF
chunk = inputFile.read(8192)
if not chunk: # EOF?
compressor.close()
return stream.getvalue()
compressor.write(chunk)
In our use case we store the result as files, as you can imagine. To use just in-memory strings, you can use a cStringIO.StringIO() object as a replacement for the file as well.
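On Python 3 (where cStringIO is gone), and for payloads small enough to hold in memory, a simpler variant is to compress the JSON string directly before it goes into Redis - a rough sketch, assuming redis-py and whatever compression level suits you:
import json
import zlib
import redis
r = redis.Redis()
def cache_set(key, obj, level=6):
    # DEFLATE-compress the serialized JSON; level trades CPU for size.
    r.set(key, zlib.compress(json.dumps(obj).encode("utf-8"), level))
def cache_get(key):
    raw = r.get(key)
    if raw is None:
        return None
    return json.loads(zlib.decompress(raw).decode("utf-8"))
cache_set("user:1", {"details": {"1": {"age": 13, "name": "dhruv"}}, "members": ["1"]})
print(cache_get("user:1"))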
| Redis | 15,525,837 | 26 |
I need to see what redis gets/sets in the redis log.
I tried to set the redis log level to debug and verbose.
This does not show me anything when I set a value.
| Unless it's important that you get it in the log itself (in which case I don't think I can help you), you should be able to use the MONITOR command:
MONITOR is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database. This command can both be used via redis-cli and via telnet.
You could do something like:
redis-cli monitor | grep -E ' "(g|s)et" '
Note that there is a performance cost, though (it's mentioned in the linked docs as roughly 50%).
Pipe it to a file:
redis-cli monitor | grep -E ' "(g|s)et" ' > redis_get_set.log
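If you would rather capture this from code than from the shell, recent versions of redis-py expose MONITOR as well - a rough sketch (the exact dictionary keys may vary between client versions, and the same ~50% performance cost applies):
import redis
r = redis.Redis()
with r.monitor() as m:
    for cmd in m.listen():                        # one entry per processed command
        name = cmd["command"].split(" ", 1)[0].upper()
        if name in ("GET", "SET"):
            print(cmd["time"], cmd["command"])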
| Redis | 14,713,084 | 26 |
I use INCR and EXPIRE to implement rate limiting, e.g., 5 requests per minute:
if EXISTS counter
count = INCR counter
else
EXPIRE counter 60
count = INCR counter
if count > 5
print "Exceeded the limit"
However, 5 requests can be sent in the last second of minute one and 5 more requests in the first second of minute two, i.e., 10 requests in two seconds.
How can this problem be avoided?
Update: I came up with this list implementation. Is this a good way to do it?
times = LLEN counter
if times < 5
LPUSH counter now()
else
time = LINDEX counter -1
if now() - time < 60
print "Exceeded the limit"
else
LPUSH counter now()
LTRIM counter 5
| You could switch from "5 requests in the last minute" to "5 requests in minute x". That way it becomes possible to do:
counter = current_time # for example 15:03
count = INCR counter
EXPIRE counter 60 # just to make sure redis doesn't store it forever
if count > 5
print "Exceeded the limit"
If you want to keep using "5 requests in the last minute", then you could do
counter = Time.now.to_i # this is Ruby and it returns the number of seconds since 1/1/1970
key = "counter:" + counter
INCR key
EXPIRE key 60
number_of_requests = KEYS "counter:*"
if number_of_requests > 5
print "Exceeded the limit"
If you have production constraints (especially performance), it is not advised to use the KEYS keyword. We could use sets instead:
counter = Time.now.to_i # this is Ruby and it returns the number of seconds since 1/1/1970
set = "my_set"
SADD set counter
members = SMEMBERS set
# remove all set members which are older than 1 minute
members.each {|member| SREM set, member if member.to_i < (Time.now.to_i - 60) }
if (SMEMBERS set).size > 5
print "Exceeded the limit"
This is all pseudo Ruby code, but should give you the idea.
| Redis | 13,175,050 | 26 |
I am trying to store a wordlist in redis. The performance is great.
My approach is to make a set called "words" and add each new word via 'sadd'.
When adding a file that's 15.9 MB and contains about a million words, the redis-server process consumes 160 MB of RAM. How come I am using 10x the memory - is there any better way of approaching this problem?
| Well this is expected of any efficient data storage: the words have to be indexed in memory in a dynamic data structure of cells linked by pointers. Size of the structure metadata, pointers and memory allocator internal fragmentation is the reason why the data take much more memory than a corresponding flat file.
A Redis set is implemented as a hash table. This includes:
an array of pointers growing geometrically (powers of two)
a second array may be required when incremental rehashing is active
single-linked list cells representing the entries in the hash table (3 pointers, 24 bytes per entry)
Redis object wrappers (one per value) (16 bytes per entry)
actual data themselves (each of them prefixed by 8 bytes for size and capacity)
All the above sizes are given for the 64 bits implementation. Accounting for the memory allocator overhead, it results in Redis taking at least 64 bytes per set item (on top of the data) for a recent version of Redis using the jemalloc allocator (>= 2.4)
Redis provides memory optimizations for some data types, but they do not cover sets of strings. If you really need to optimize memory consumption of sets, there are tricks you can use though. I would not do this for just 160 MB of RAM, but should you have larger data, here is what you can do.
If you do not need the union, intersection, difference capabilities of sets, then you may store your words in hash objects. The benefit is hash objects can be optimized automatically by Redis using zipmap if they are small enough. The zipmap mechanism has been replaced by ziplist in Redis >= 2.6, but the idea is the same: using a serialized data structure which can fit in the CPU caches to get both performance and a compact memory footprint.
To guarantee the hash objects are small enough, the data could be distributed according to some hashing mechanism. Assuming you need to store 1M items, adding a word could be implemented in the following way:
hash it modulo 10000 (done on client side)
HMSET words:[hashnum] [word] 1
Instead of storing:
words => set{ hi, hello, greetings, howdy, bonjour, salut, ... }
you can store:
words:H1 => map{ hi:1, greetings:1, bonjour:1, ... }
words:H2 => map{ hello:1, howdy:1, salut:1, ... }
...
To retrieve or check the existence of a word, it is the same (hash it and use HGET or HEXISTS).
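A small Python illustration of that bucketing idea (the CRC32/modulo choice and the words: prefix are just examples, not part of Redis itself):
import zlib
import redis
r = redis.Redis()
BUCKETS = 10000   # chosen so each hash stays under hash-max-zipmap-entries / hash-max-ziplist-entries
def bucket(word):
    return "words:%d" % (zlib.crc32(word.encode("utf-8")) % BUCKETS)
def add_word(word):
    r.hset(bucket(word), word, 1)
def has_word(word):
    return r.hexists(bucket(word), word)
add_word("hello")
print(has_word("hello"))   # True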
With this strategy, significant memory saving can be done provided the modulo of the hash is
chosen according to the zipmap configuration (or ziplist for Redis >= 2.6):
# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
Beware: the names of these parameters have changed with Redis >= 2.6.
Here, modulo 10000 for 1M items means 100 items per hash object, which will guarantee that all of them are stored as zipmaps/ziplists.
| Redis | 10,004,565 | 26 |
Does anyone have a solid pattern fetching Redis via BookSleeve library?
I mean:
BookSleeve's author @MarcGravell recommends not to open & close the connection every time, but rather maintain one connection throughout the app. But how can you handle network breaks? i.e. the connection might be opened successfully in the first place, but when some code tries to read/write to Redis, there is the possibility that the connection has dropped and you must reopen it (and fail gracefully if it won't open - but that is up to your design needs.)
I am looking for code snippet(s) that cover general Redis connection opening, and a general 'alive' check (+ optional wake-up if not alive) that would be used before each read/write.
This question suggests a nice approach to the problem, but it's only partial (it does not recover a lost connection, for example), and the accepted answer to that question points in the right direction but does not demonstrate concrete code.
I hope this thread will get solid answers and eventually become a sort of a Wiki with regards to BookSleeve use in .Net applications.
-----------------------------
IMPORTANT UPDATE (21/3/2014):
-----------------------------
Marc Gravell (@MarcGravell) / Stack Exchange have recently released the StackExchange.Redis library that ultimately replaces Booksleeve. This new library, among other things, internally handles reconnections and renders my question redundant (that is, it's not redundant for Booksleeve nor my answer below, but I guess the best way going forward is to start using the new StackExchange.Redis library).
| Since I haven't got any good answers, I came up with this solution (BTW thanks @Simon and @Alex for your answers!).
I want to share it with all of the community as a reference. Of course, any corrections will be highly appreciated.
using System;
using System.Net.Sockets;
using BookSleeve;
namespace Redis
{
public sealed class RedisConnectionGateway
{
private const string RedisConnectionFailed = "Redis connection failed.";
private RedisConnection _connection;
private static volatile RedisConnectionGateway _instance;
private static object syncLock = new object();
private static object syncConnectionLock = new object();
public static RedisConnectionGateway Current
{
get
{
if (_instance == null)
{
lock (syncLock)
{
if (_instance == null)
{
_instance = new RedisConnectionGateway();
}
}
}
return _instance;
}
}
private RedisConnectionGateway()
{
_connection = getNewConnection();
}
private static RedisConnection getNewConnection()
{
return new RedisConnection("127.0.0.1" /* change with config value of course */, syncTimeout: 5000, ioTimeout: 5000);
}
public RedisConnection GetConnection()
{
lock (syncConnectionLock)
{
if (_connection == null)
_connection = getNewConnection();
if (_connection.State == RedisConnectionBase.ConnectionState.Opening)
return _connection;
if (_connection.State == RedisConnectionBase.ConnectionState.Closing || _connection.State == RedisConnectionBase.ConnectionState.Closed)
{
try
{
_connection = getNewConnection();
}
catch (Exception ex)
{
throw new Exception(RedisConnectionFailed, ex);
}
}
if (_connection.State == RedisConnectionBase.ConnectionState.Shiny)
{
try
{
var openAsync = _connection.Open();
_connection.Wait(openAsync);
}
catch (SocketException ex)
{
throw new Exception(RedisConnectionFailed, ex);
}
}
return _connection;
}
}
}
}
| Redis | 8,645,953 | 26 |
I need to save a User model, something like:
{ "nickname": "alan",
"email": ...,
"password":...,
...} // and a couple of other fields
Today, I use a Set: users
In this Set, I have a member like user:alan
In this member I have the hash above
This is working fine, but I was just wondering if, instead of the above approach, it could make sense to use the following one:
Still use users Set (to easily get the users (members) list)
In this set only use a key / value storage like:
key: alan
value : the stringify version of the above user hash
Retrieving a record would then be easier (I will then have to Parse it with JSON).
I'm very new to redis and I am not sure which would be best. What do you think?
| You can use Redis hashes data structure to store your JSON object fields and values. For example your "users" set can still be used as a list which stores all users and your individual JSON object can be stored into hash like this:
db.hmset("user:id", JSON.stringify(jsonObj));
Now you can get by key all users or only specific one (from which you get/set only specified fields/values). Also these two questions are probably related to your scenario.
EDIT: (sorry I didn't realize that we talked about this earlier)
Retrieving a record would then be easier (I will then have to Parse it with JSON).
This is true, but with hash data structure you can get/set only the field/value which you need to work with. Retrieving entire JSON object can result in decrease of performance (depends on how often you do it) if you only want to change part of the object (other thing is that you will need to stringify/parse the object everytime).
| Redis | 5,729,891 | 26 |
My team wants to move to a microservices architecture. Currently we are using Redis Pub/Sub as the message broker for some legacy parts of our system. My colleagues think that it is natural to continue using redis as a service bus, as they don't want to spend their time studying a new product. But in my opinion RabbitMQ (especially with MassTransit) is a better approach for microservices. Could you please compare Redis Pub/Sub with RabbitMQ and give me some arguments for Rabbit?
| Redis is a fast in-memory key-value store with optional persistence. The pub/sub feature of Redis is a marginal case for Redis as a product.
RabbitMQ is the message broker that does nothing else. It is optimized for reliable delivery of messages, both in command style (send to an endpoint exchange/queue) and publish-subscribe. RabbitMQ also includes the management plugin that delivers a helpful API to monitor the broker status, check the queues and so on.
Dealing with Redis pub/sub at the low level of the Redis client can be a very painful experience. You could use a library like ServiceStack, which has a higher-level abstraction to make it more manageable.
However, MassTransit adds a lot of value compared to raw messaging over RMQ. As soon as you start doing stuff for real, no matter what transport you decide to use, you will hit the typical issues associated with messaging, like handling replies, scheduling, long-running processes, re-delivery, dead-letter queues, and poison queues. MassTransit does it all for you. Neither the Redis nor the RMQ client would deliver any of those. If your team wants to spend time dealing with those concerns in their own code, that's more like reinventing the wheel. Using the argument of "not willing to learn a new product" in this context sounds a bit weird, since, instead of delivering value for the product, developers would be spending their time dealing with infrastructure concerns.
| Redis | 52,592,796 | 25 |
Background
I am making a publish/subscribe typical application where a publisher sends messages to a consumer.
The publisher and the consumer are on different machines and the connection between them can break occasionally.
Objective
The goal here is to make sure that no matter what happens to the connection, or to the machines themselves, a message sent by a publisher is always received by the consumer.
Ordering of messages is not a must.
Problem
According to my research, RabbitMQ is the right choice for this scenario:
Redis Vs RabbitMQ as a data broker/messaging system in between Logstash and elasticsearch
However, although RabbitMQ has a tutorial about publish and subscribe, this tutorial does not introduce persistent queues, nor does it mention confirms, which I believe are the key to making sure messages are delivered.
On the other hand, Redis is also capable of doing this:
http://abhinavsingh.com/customizing-redis-pubsub-for-message-persistence-part-2/
but I couldn't find any official tutorials or examples, and my current understanding leads me to believe that persistent queues and message confirms must be done by us, as Redis is mainly an in-memory datastore rather than a message broker like RabbitMQ.
Questions
For this use case, which solution would be the easiest to implement? (Redis solution or RabbitMQ solution?)
Please provide a link to an example with what you think would be best!
| Background
I originally wanted publish and subscribe with message and queue persistence.
This, in theory, does not exactly fit publish and subscribe:
this pattern doesn't care if the messages are received or not. The publisher simply fans out messages and if there are any subscribers listening, good, otherwise it doesn't care.
Indeed, looking at my needs I would need more of a Work Queue pattern, or even an RPC pattern.
Analysis
People say both should be easy, but that really is subjective.
RabbitMQ has better official documentation overall, with clear examples in most languages, while Redis information is mainly in third-party blogs and sparse GitHub repos - which makes it considerably harder to find.
As for the examples, RabbitMQ has two examples that clearly answer my questions:
Work queues
RPC example
By mixing the two I was able to have a publisher send to several consumers reliable messages - even if one of them fails. Messages are not lost, nor forgotten.
Downfall of RabbitMQ:
The greatest problem of this approach is that if a consumer/worker crashes, you need to define the logic yourself to make sure that tasks are not lost. This happens because once a task is completed, following the RPC pattern with durable queues from Work Queues, the server will keep sending messages to the worker until it comes back up again. But the worker doesn't know if it has already read the reply from the server or not, so it can receive several replies from the server. To fix this, each worker message needs to have an ID that you save to disk (in case of failure), or the requests must be idempotent.
Another issue is that if the connection is lost, the clients blow up with errors as they cannot connect. This is also something you must prepare in advance.
As for redis, it has a good example of durable queues in this blog:
https://danielkokott.wordpress.com/2015/02/14/redis-reliable-queue-pattern/
Which follows the official recommendation. You can check the github repo for more info.
Downfall of redis:
As with rabbitmq, you also need to handle worker crashes yourself, otherwise tasks in progress will be lost.
You have to do polling. Each consumer needs to ask the producer if there are any news, every X seconds.
This is, in my opinion, a worst rabbitmq.
Conclusion
I ending up going with rabbitmq for the following reasons:
More robust official online documentation, with examples.
No need for consumers to poll the producer.
Error handling is just as simple as in redis.
With this in mind, for this specific case, I am confident in saying that Redis is worse than RabbitMQ in this scenario.
Hope it helps.
| Redis | 43,777,807 | 25 |
We know that ElastiCache is not recommended to be accessed from outside Amazon instances, so we're trying the steps below inside Amazon EC2 instances only.
We've got an ElastiCache Redis Cluster with 9 nodes. When we try to connect to it using a normal redis client implementation, it throws some MOVED errors.
We have tried the retry strategy method as per @Miller, and have also tried RedisCluster with both the unstable and the stable (poor man) implementations.
None of these implementations are working. Any suggestions please?
| Sharing the code for future readers:
var RedisClustr = require('redis-clustr');
var RedisClient = require('redis');
var config = require("./config.json");
var redis = new RedisClustr({
servers: [
{
host: config.redisClusterHost,
port: config.redisClusterPort
}
],
createClient: function (port, host) {
// this is the default behaviour
return RedisClient.createClient(port, host);
}
});
//connect to redis
redis.on("connect", function () {
console.log("connected");
});
//check the functioning
redis.set("framework", "AngularJS", function (err, reply) {
console.log("redis.set " , reply);
});
redis.get("framework", function (err, reply) {
console.log("redis.get ", reply);
});
| Redis | 43,872,852 | 25 |
If I run the redis:alpine Docker image using the command
docker run redis:alpine
I see several warnings:
1:C 08 May 08:29:32.308 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.2.8 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
1:M 08 May 08:29:32.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 08 May 08:29:32.311 # Server started, Redis version 3.2.8
1:M 08 May 08:29:32.311 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 08 May 08:29:32.311 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 May 08:29:32.311 * The server is now ready to accept connections on port 6379
I've tried to fix the first two of these warnings using the following Dockerfile:
FROM redis:alpine
COPY somaxconn /proc/sys/net/core/somaxconn
COPY sysctl.conf /etc/sysctl.conf
CMD ["redis-server", "--appendonly", "yes"]
where my local file somaxconn contains the single entry 511 and sysctl.conf contains the line vm.overcommit_memory = 1. However, I still get the same warnings when I build and run the container.
How can I get rid of these warnings? (There is mention of the issues in https://www.techandme.se/performance-tips-for-redis-cache-server/ but the solution described there, involving modifying rc.local, seems to pertain to Raspberry Pi).
| Bad way to handle things: /proc is a read-only filesystem, so to modify it you can run Docker in privileged mode; then you can change it after the container has started.
If running the container in privileged mode, you can disable THP using these commands:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
Proper way: Ensure that you run a newer version of Docker (upgrade if needed). The run subcommand has the --sysctl option:
$ docker run -ti --sysctl net.core.somaxconn=4096 --rm redis:alpine /bin/sh
root@9e850908ddb7:/# sysctl net.core.somaxconn
net.core.somaxconn = 4096
...
Unfortunately, vm.overcommit_memory is currently not allowed to be set via the --sysctl parameter, and the same applies to THP (transparent_hugepage); this is because they are not namespaced. Thus, to fix these warnings in a container running on a Linux host, you can change them directly on the host. Here are the related issues:
#19
#55
You don't need privileged mode for the proper way approach.
| Redis | 43,843,197 | 25 |
Hello, when trying to use spring-redis I am getting the
java.lang.NoClassDefFoundError: Could not initialize class org.springframework.data.redis.connection.jedis.JedisConnection
exception when doing any connection operation using redis. My config method goes like this:
@Bean
public RedisConnectionFactory jedisConnFactory() {
JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
jedisConnectionFactory.setHostName("XXX.XX.XX.XXX");
jedisConnectionFactory.setPort(6381);
jedisConnectionFactory.setUsePool(true);
jedisConnectionFactory.afterPropertiesSet();
return jedisConnectionFactory;
}
Please suggest if anyone knows why I am getting this exception.
| After wasting almost one day, and finding that the jar was already on my classpath, I debugged further and found that when Java's reflection mechanism was trying to find a method that was already present in the "methods list", it was not able to find it due to a version conflict: Jedis 2.7.2 is not compatible with Spring Data Redis 1.5.0.RELEASE. This issue has already been answered here:
Jedis and spring data redis version conflict
| Redis | 33,128,318 | 25 |
I wish to install redis on my red-hat environment. I do the following:
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
I got the next error:
make[3]: *** [net.o] Error 127
make[3]: Leaving directory `/tmp/redis-stable/deps/hiredis'
make[2]: *** [hiredis] Error 2
make[2]: Leaving directory `/tmp/redis-stable/deps'
make[1]: [persist-settings] Error 2 (ignored)
CC adlist.o
/bin/sh: cc: command not found
make[1]: *** [adlist.o] Error 127
make[1]: Leaving directory `/tmp/redis-stable/src'
make: *** [all] Error 2
How can I fix it?
| You are trying to install redis from source code. What this process does is compile and create an executable on your machine, and then install it. To do this you need various tools like gcc. The best way is to install them all together by installing the relevant package group. Run this from a terminal:
yum grouplist
This will show all available groups; choose the group you want to install, or run this directly:
yum groupinstall 'Development Tools'
This will save you from other problems which might come up in the future while installing from source.
| Redis | 30,692,708 | 25 |
I am looking at redis to provide an intermediate cache store with a lot of computation around set operations like intersection and union.
I have looked at the redis website, and found that redis is not designed for a multi-core CPU. My question is, why is that so?
Also, if yes, how can we make 100% utilization of CPU resources with redis on multi-core CPUs?
|
I have looked at the redis website, and found that the redis is not designed for a multi-core CPU. My question is, Why is it so?
It is a design decision.
Redis is single-threaded with epoll/kqueue and scales indefinitely in terms of I/O concurrency. --@antirez (creator of Redis)
A reason for choosing an event-driven approach is that synchronization between threads comes at a cost in both the software (code complexity) and the hardware level (context switching). Add to this that the bottleneck of Redis is usually the network or the *memory, not the CPU. On the other hand, a single-threaded architecture has its own benefits (for example the guarantee of atomicity).
Therefore event loops seem like a good design for an efficient & scalable system like Redis.
Also, if yes, how can we make 100% utilization of CPU resources with
redis on a multi core CPU's.
The Redis approach to scaling over multiple cores is sharding, mostly together with Twemproxy.
However if for some reason you still want to use a multi-threaded approach, take a look at Thredis but make sure you understand the implications of what its author did (you can not use it as a replication master, for instance).
| Redis | 21,304,947 | 25 |
I am currently using django with celery and everything works fine.
However I want to be able to give the users an opportunity to cancel a task if the server is overloaded by checking how many tasks are currently scheduled.
How can I achieve this ?
I am using redis as broker.
I just found this :
Retrieve list of tasks in a queue in Celery
It is somehow relate to my issue but I don't need to list the tasks , just count them :)
| Here is how you can get the number of messages in a queue using celery that is broker-agnostic.
By using connection_or_acquire, you can minimize the number of open connections to your broker by utilizing celery's internal connection pooling.
celery = Celery(app)
with celery.connection_or_acquire() as conn:
conn.default_channel.queue_declare(
queue='my-queue', passive=True).message_count
You can also extend Celery to provide this functionality:
from celery import Celery as _Celery
class Celery(_Celery):
def get_message_count(self, queue):
'''
Raises: amqp.exceptions.NotFound: if queue does not exist
'''
with self.connection_or_acquire() as conn:
return conn.default_channel.queue_declare(
queue=queue, passive=True).message_count
celery = Celery(app)
num_messages = celery.get_message_count('my-queue')
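If you know the broker is Redis (as in the question), a cruder shortcut is also possible, because the Redis transport keeps each ready queue as a plain Redis list named after the queue - a sketch, assuming the default 'celery' queue name; note that it ignores messages already prefetched/reserved by workers:
import redis
r = redis.Redis()            # point this at the same host/db as your broker URL
print(r.llen("celery"))      # pending (not yet reserved) messages in the default queue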
| Redis | 18,631,669 | 25 |
I would like to export a subset of my Redis data on the slave to a csv file. I notice a new csv output option was added to redis-cli but I am unable to find documentation of how it works. Enabling the option prints the command outputs to screen in csv format. What is the best way to get this into a csv file?
| Cutting edge!
I've just looked at the source code & all it does is output the commands as comma separated values to stdout. Which is no big surprise.
So you could just redirect it to a file, in the standard way, as long as you're on Linux?
e.g./
redis-cli --csv your-command > stdout.csv 2> stderr.txt
| Redis | 11,368,615 | 25 |
I have been using heroku redis for a while now on one of my side projects. I currently use it for 3 things
It serves as a place for me to store firebase certificates
It is used for caching data on the site
It is used for rails sidekiq jobs
Recently, my heroku usage went up and I had to change it to use heroku redis premium plan. Ever since then I have been getting error: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain) somehow. Everything stayed the same yet the error started popping out of nowhere.
Does the heroku-redis premium plan work fundamentally differently than a basic heroku-redis plan?
I am using ruby on rails, deployed on heroku with heroku redis if that helps.
| According to Heroku's docs
You need to
Create an initializer file named config/initializers/redis.rb
containing:
$redis = Redis.new(url: ENV["REDIS_URL"], ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE })
Also if you are having this issue while attempting to use sidekiq:
Create an initializer file named config/initializers/sidekiq.rb containing:
Sidekiq.configure_server do |config|
config.redis = { ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
end
Sidekiq.configure_client do |config|
config.redis = { ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
end
| Redis | 66,246,528 | 24 |
How can I browse all the pending jobs within my Redis queue so that I could cancel the Mailable that has a certain emailAddress-sendTime pair?
I'm using Laravel 5.5 and have a Mailable that I'm using successfully as follows:
$sendTime = Carbon::now()->addHours(3);
Mail::to($emailAddress)
->bcc([config('mail.supportTeam.address'), config('mail.main.address')])
->later($sendTime, new MyCustomMailable($subject, $dataForMailView));
When this code runs, a job gets added to my Redis queue.
I've already read the Laravel docs but remain confused.
How can I cancel a Mailable (prevent it from sending)?
I'd love to code a webpage within my Laravel app that makes this easy for me.
Or maybe there are tools that already make this easy (maybe FastoRedis?)? In that case, instructions about how to achieve this goal that way would also be really helpful. Thanks!
Update:
I've tried browsing the Redis queue using FastoRedis, but I can't figure out how to delete a Mailable, such as the red arrow points to here:
UPDATE:
Look at the comprehensive answer I provided below.
| Make it easier.
Don't send an email with the later option. You must dispatch a Job with the later option, and this job will be responsible to send the email.
Inside this job, before send the email, check the emailAddress-sendTime pair. If is correct, send the email, if not, return true and the email won't send and the job will finish.
| Redis | 48,255,735 | 24 |
I am creating a node API using javascript. I have used redis as my key value store.
I created a redis client in my app and am able to get values for a particular key.
I want to retrieve all keys along with their values.
So far I have done this:
app.get('/jobs', function (req, res) {
var jobs = [];
client.keys('*', function (err, keys) {
if (err) return console.log(err);
if(keys){
for(var i=0;i<keys.length;i++){
client.get(keys[i], function (error, value) {
if (err) return console.log(err);
var job = {};
job['jobId']=keys[i];
job['data']=value;
jobs.push(job);
});
}
console.log(jobs);
res.json({data:jobs});
}
});
});
but I always get blank array in response.
Is there any way to do this in JavaScript?
Thanks
| First of all, the issue in your question is that, inside the for loop, client.get is invoked with an asynchronous callback; the synchronous for loop does not wait for those callbacks, so the next line, res.json({data:jobs});, is called immediately after the loop, before the callbacks have run. At the time res.json({data:jobs}); is invoked, the jobs array is still empty [] and is returned with the response.
To mitigate this, you should use one of the promise/async modules like async, bluebird, ES6 Promises, etc.
Modified code using the async module:
app.get('/jobs', function (req, res) {
var jobs = [];
client.keys('*', function (err, keys) {
if (err) return console.log(err);
if(keys){
async.map(keys, function(key, cb) {
client.get(key, function (error, value) {
if (error) return cb(error);
var job = {};
job['jobId']=key;
job['data']=value;
cb(null, job);
});
}, function (error, results) {
if (error) return console.log(error);
console.log(results);
res.json({data:results});
});
}
});
});
But note that, per the Redis documentation, KEYS is intended for debugging and special operations, such as changing your keyspace layout, and is not advisable in production environments.
Hence, I would suggest using another module called redisscan as below which uses SCAN instead of KEYS as suggested in the Redis documentation.
Something like,
var redisScan = require('redisscan');
var redis = require('redis').createClient();
redisScan({
redis: redis,
each_callback: function (type, key, subkey, value, cb) {
console.log(type, key, subkey, value);
cb();
},
done_callback: function (err) {
console.log("-=-=-=-=-=--=-=-=-");
redis.quit();
}
});
| Redis | 42,926,990 | 24 |
I'm trying to access the Redis server through the code and it's not connecting. But if I bash into the redis container I can access the redis-cli.
docker-compose.yml looks like this
version: '2'
services:
web:
build:
context: .
dockerfile: Dockerfile_nginx
ports:
- "9000:80"
environment:
- NGINX_SERVERNAME=xxx.dev *.xxx.dev
command: /bin/bash -c "envsubst '$$NGINX_SERVERNAME' < /var/www/site.template > /etc/nginx/conf.d/default.conf
&& dos2unix /var/www/provision/init_storage.sh && sh /var/www/provision/init_storage.sh
&& nginx -g 'daemon off;'"
volumes:
- .:/var/www
links:
- php
networks:
- frontend
php:
build:
context: .
dockerfile: Dockerfile_php-fpm
command: /bin/bash -c "composer install
&& php-fpm"
volumes:
- .:/var/www
environment:
- APP_ENV=local
- APP_DEBUG=true
networks:
- frontend
- backend
links:
- redis
db:
build:
context: .
dockerfile: Dockerfile_mariadb
volumes:
- ./initdb:/docker-entrypoint-initdb.d
ports:
- "3309:3306"
environment:
MYSQL_ROOT_PASSWORD: xxxx
MYSQL_DATABASE: xxxx
networks:
- backend
redis:
build:
context: .
dockerfile: Dockerfile_redis
ports:
- "6379:6379"
networks:
frontend:
driver: bridge
backend:
driver: bridge
Dockerfile_redis
FROM redis:latest
When i try to connect to the redis server using this code
$redis = new \Redis();
try {
$redis->connect('127.0.0.1', 6379);
} catch (\Exception $e) {
var_dump($e->getMessage()) ;
die;
}
It gives this warning
Warning: Redis::connect(): connect() failed: Connection refused
Does anyone know how to connect the Redis container to the PHP container?
| Your Problem
Docker Compose creates a separate docker container for each service. Each container is, logically speaking, like a different, separate computer server that is only connected to the others through the docker network.
Consider each box in this diagram as an individual computer; then this is practically what you have:
+----------------------------------------------------------+
| your machine |
+----------------------------------------------------------+
|
+------ (virtual network by docker) -------+
| | |
+-----------------+ +-------------------+ +----------------+
| "php" container | | "redis" container | | "db" container |
+-----------------+ +-------------------+ +----------------+
Your PHP container doesn't see any redis in "localhost" because there is no redis in it, just like it wouldn't see any MySQL in "localhost". Your redis is running in the "redis" container. Your MySQL is running in your "db" container.
The thing that confuses you is the port binding directive (i.e. ports in this definition):
redis:
build:
context: .
dockerfile: Dockerfile_redis
ports:
- "6379:6379"
The port 6379 of the "redis" container is binded to your computer, but to your computer ONLY. Other container doesn't have the same access to the port bindings. So even your computer can connect it with '127.0.0.1:6379', the php container cannot do the same.
Solution
As described in Networking in Docker Compose, each docker compose container can access the other containers by using the service name as the hostname. For example, your program running in the php service can access your MySQL service with the hostname db.
So you should connect to redis using its hostname, redis:
$redis = new \Redis();
try {
$redis->connect('redis', 6379);
} catch (\Exception $e) {
var_dump($e->getMessage()) ;
die;
}
| Redis | 42,360,356 | 24 |
I am working on a project which needs to broadcast latitude and longitude in real time.
I have something like below:
namespace App\Events;
use App\Events\Event;
use Illuminate\Queue\SerializesModels;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Support\Facades\Redis;
class TrackersBroadcast extends Event implements ShouldBroadcast
{
public $lat, $lng, $imei, $date_time;
use SerializesModels;
/**
* Create a new event instance.
*
* @return void
*/
public function __construct(
$lat,
$lng,
$imei,
$date_time
)
{
$this->lat = $lat;
$this->lng = $lng;
$this->imei = $imei;
$this->date_time = $date_time;
}
/**
* Get the channels the event should be broadcast on.
*
* @return array
*/
public function broadcastOn()
{
return ['tracker-channel'];
}
}
In some cases I need to trigger a real-time email, so I decided to implement a Laravel message queue like below:
namespace App\Jobs;
use App\Jobs\Job;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Mail;
class SendAlertEmail extends Job implements ShouldQueue
{
use InteractsWithQueue, SerializesModels;
public $data;
/**
* Create a new job instance.
*
* @return void
*/
public function __construct($data)
{
$this->data=$data;
}
/**
* Execute the job.
*
* @return void
*/
public function handle()
{
//
Mail::send('emails.test', ['testVar' => $this->data], function($message) {
$message->from('[email protected]', 'Website Name');
$message->to('[email protected]')->subject('A simple test');
});
}
}
Whenever I run php artisan queue:listen database it will queue my broadcasting event too. I don't want to queue the broadcast event. How can I do that?
| Because Laravel event broadcasting is queued by default when you implement the ShouldBroadcast interface. If you don't want the event broadcast to be queued, you should implement the ShouldBroadcastNow interface instead.
use Illuminate\Contracts\Broadcasting\ShouldBroadcastNow;
class TrackersBroadcast implements ShouldBroadcastNow
{
......
}
So this means your event broadcast will use the sync queue driver.
| Redis | 41,390,753 | 24 |
How can I change the name of databases in redis? Example:
Db01
key01
key02
Db02
key01
key02
Db03
key01
key02
I want to change the name of Db01 (or Db02, Db03) to another name.
| Redis databases are identified by an integer index, there is no database name.
By default there are 16 databases, indexed from 0 to 15.
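Since databases are addressed by index rather than by name, clients simply select the index (SELECT 2 in redis-cli, or for example with redis-py; the index 2 below is arbitrary):
import redis
db2 = redis.Redis(host="localhost", port=6379, db=2)   # every command on this client targets database 2
db2.set("key01", "value")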
Check the following article: https://www.digitalocean.com/community/cheatsheets/how-to-manage-redis-databases-and-keys
| Redis | 35,931,043 | 24 |
I am looking for a very simple starter C# application for using StackExchange.Redis
I have search over the web and found StackExchange.Redis
But this doesn't seems like a quick startup example.
I have setup redis on windows using
StackExchange.Redis exe
Can anyone help me locate a simple C# application connecting with redis server and setting and getting some keys.
| You can find C# examples in the readme file.
using StackExchange.Redis;
...
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
// ^^^ store and re-use this!!!
IDatabase db = redis.GetDatabase();
string value = "abcdefg";
db.StringSet("mykey", value);
...
string value = db.StringGet("mykey");
Console.WriteLine(value); // writes: "abcdefg"
| Redis | 32,888,513 | 24 |
Is there any way to Remove all Redis Client Connections with one command?
I know that it's possible to remove by IP:PORT
CLIENT KILL addr:port
Also, I found that it has been possible to do this since Redis 2.8.12, but I couldn't find anything about how.
| CLIENT KILL can receive TYPE argument that can be one of a three connection types; normal, slave and pubsub.
You can kill all open connections by sending the following three commands:
CLIENT KILL TYPE normal
CLIENT KILL TYPE slave
CLIENT KILL TYPE pubsub
Note that you can skip the later two if you do not use them (slave and pubsub connections).
You can also add SKIPME no if you want a kamikaze killer that closes your own connection as well.
| Redis | 30,790,748 | 24 |
I'd like to view the time of most recent access for a specific key on my redis server.
I know that this information is stored for each key because it is used in redis's LRU algorithm for eliminating old keys.
Is there an easy way to see this information for a given key?
| You can use the OBJECT IDLETIME command for this purpose. It returns the number of seconds since the key was accessed, but If you need the time just subtract the reply from now().
| Redis | 26,245,623 | 24 |
AWS has Redis support via the ElastiCache Service. My question is, can I connect to Redis on AWS Elasticache from node, using the following:
var client = require('redis').createClient(6379, 'elastichache endpoint string', {no_ready_check: true});
Or do I have to use the NodeJS AWS SDK?
I realize I could set up my own Redis Server on an EC2 instance, but I want to use ElastiCache. Is this possible?
| Yes, this is a common use case. You can connect directly to redis without using the SDK. Just make sure you have configured the security group correctly to allow access from your app server.
| Redis | 22,432,818 | 24 |
Suppose my database table structure is like this
id name college address
1 xxx nnn xn
2 yyy nnm yn
3 zzz nnz zn
If i want to get the student details based on the name in sql like this
select * from student where name = 'xxx'
so how its is possible in redis database
| Redis, like other NoSQL datastores, has different requirements based on what you are going to be doing.
Redis has several data structures that could be useful depending on your need. For example, given your desire for a select * from student where name = 'xxx' you could use a Redis hash.
redis 127.0.0.1:6379> hmset student:xxx id 1 college nnn address xn
OK
redis 127.0.0.1:6379> hgetall student:xxx
1) "id"
2) "1"
3) "college"
4) "nnn"
5) "address"
6) "xn"
If you have other queries though, like you want to do the same thing but select on where college = 'nnn' then you are going to have to denormalize your data. Denormalization is usually a bad thing in SQL, but in NoSQL it is very common.
If your primary query will be against the name, but you may need to query against the college, then you might do something like adding a set in addition to the hashes.
redis 127.0.0.1:6379> sadd college:nnn student:xxx
(integer) 1
redis 127.0.0.1:6379> smembers college:nnn
1) "student:xxx"
With your data structured like this, if you wanted to find all the information for the students attending college nnn, you would first read the set, then fetch each hash based on the member names returned from the set.
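In client code, that two-step lookup looks roughly like this (Python/redis-py is used here purely for illustration):
import redis
r = redis.Redis(decode_responses=True)
def students_by_college(college):
    rows = []
    for member in r.smembers("college:%s" % college):   # e.g. "student:xxx"
        row = r.hgetall(member)                          # the stored fields: id, college, address
        row["name"] = member.split(":", 1)[1]
        rows.append(row)
    return rows
print(students_by_college("nnn"))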
Your requirements will generally drive the design and the structures you use.
| Redis | 21,347,437 | 24 |
An inspection of currently running Celery tasks reveals a weird time_start timestamp:
>> celery.app.control.inspect().active()
{u'[email protected]': [{u'acknowledged': True,
u'args': u'(...,)',
u'delivery_info': {u'exchange': u'celery',
u'priority': 0,
u'redelivered': None,
u'routing_key': u'celery'},
u'hostname': u'[email protected]',
u'id': u'3d92fdfd-524e-4ba1-98cb-cf83af2ad8e9',
u'kwargs': u'{}',
u'name': u'task_name',
u'time_start': 9636801.218162088,
u'worker_pid': 7931}]}
The time_start attribute dates the task back to 1970 (that's before the creation of Celery, Python, and I don't own a customised DeLorean):
>> from datetime import datetime
>> datetime.fromtimestamp(9636801.218162088)
datetime.datetime(1970, 4, 22, 13, 53, 21, 218162)
Am I misinterpreting the time_start attribute? Is my Celery app misconfigured?
I am using Celery 3.1.4 on Linux with a Django app and a Redis backend.
Tasks are run by a worker that is executed as follows:
./manage.py celery worker --loglevel=INFO --soft-time-limit=600 --logfile=/tmp/w1.log --pidfile=/tmp/w1.pid -n 'w1.%%h'
| I found the answer to my own question by digging in the Celery and Kombu code: the time_start attribute of a task is computed by the kombu.five.monotonic function. (Ironically, the kombu code also refers to another StackOverflow question for reference) The timestamp returned by that function refers to a "monotonic" time computed by the clock_gettime system call.
As explained in the clock_gettime documentation, this monotonic time represents the time elapsed "since some unspecified starting point". The purpose of this function is to make sure that time increases monotonically, despite changes of other clock values.
Thus, in order to obtain the real datetime at which the task was started, we just need to compare the time_start attribute to the current value of the monotonic clock:
>> from datetime import datetime
>> from time import time
>> import kombu.five
>> datetime.fromtimestamp(time() - (kombu.five.monotonic() - 9636801.218162088))
datetime.datetime(2013, 11, 20, 9, 55, 56, 193768)
EDIT: the time_start attribute reported by inspection is no longer monotonic: https://github.com/celery/celery/pull/3684 - and it only took me four years to write a proper pull request 0:-)
| Redis | 20,091,505 | 24 |
I wanted to make some changes in redis.conf, so that whenever I type redis-cli it connects me to redis installed on a remote server.
I know that we can connect to redis installed on remote server by :
redis-cli -h 'IP-Address-Of-Server'.
But actually, I have some bash scripts, and in those scripts I have used redis-cli in many places. So instead of replacing redis-cli with redis-cli -h 'IP-Address-Of-Server' in each file, I wanted to somehow change the redis configuration so that by default it connects me to the remote server. I hope it makes sense :)
| There is no good reason to touch the redis conf for this.
Just make a script that wraps redis-cli with the desired parameters to connect to the remote host.
E.g. create a redis-cli-remotename.sh:
#!/bin/sh
redis-cli -h remote.host_name
and give it +x permissions (e.g. chmod +x redis-cli-remotename.sh).
| Redis | 16,817,996 | 24 |
I am using node.js to write a web service. It calls an API for some data, but I am limited by the API to a number of calls per month, so I wish to cache the data I retrieve from the API, serve the cached data, and re-fetch the data from the API at a timed interval.
Is this a good approach for this problem? And what caching framework should I use? I looked at node-redis but I don't think a key value store is appropriate for the data.
Thanks!
| I would disagree with you regarding Redis. Redis is a very powerful key-value store that can easily be used for what you want. It is designed to have stuff dumped in it and taken out again. In your situation, you can easily cache the API response by saving it into Redis with the query as the key (if this is a REST API you're calling, you could just use the URL or serialized data as the key) and simply cache the response as a stringified JSON object (or XML string if you happen to be getting that).
You can also set an expiry on the cached data, and it will be cleared when the time expires.
You could then wrap your API call in a helper function which checks the cache and returns the value if it's present. If it's not, it makes the API request, adds the result to the cache, and then returns it.
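That wrapper is the classic cache-aside pattern; the flow is identical with node-redis, but here is a minimal sketch using Python and redis-py just to keep the example short (the key prefix and TTL are arbitrary choices):
import json
import redis

r = redis.Redis(decode_responses=True)

def cached_api_call(query, fetch_from_api, ttl_seconds=3600):
    key = "apicache:" + query          # hypothetical key prefix
    cached = r.get(key)
    if cached is not None:             # cache hit: skip the rate-limited API
        return json.loads(cached)
    result = fetch_from_api(query)     # cache miss: call the API once
    r.set(key, json.dumps(result), ex=ttl_seconds)  # store with an expiry
    return result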
This is probably the most straightforward solution and seems to cover your use case pretty well.
| Redis | 15,607,180 | 24 |
I've been considering using Redis in a project that uses a lot of writes.
So, after setting it up on my server, I created a simple PHP script using Predis with a simple loop to add records. I also created a second script that does a similar thing, only on a MySQL table (InnoDB) using PHP's MySQLi.
I ran a 10k loop, a 100k loop and a 500k loop and MySQL beat Redis every single time. In fact, the more records I added, the faster MySQL was compared to Redis.
There's so much buzz (hype?) around Redis, that I want to believe I'm missing something here.
Please educate me :)
Thanks!
Here's my Predis code:
for ($i=0; $i<100000; $i++) {
$predis->set('key'.$i, $i);
}
Here's my MySQLi code:
for ($i=0; $i<100000; $i++) {
mysqli_query($db, "INSERT INTO test (`key`, `value`) VALUES ('key$i', $i)");
}
| Comparing predis to mysqli is inappropriate
mysqli is an extension, whereas Predis is a PHP client library. In other words, mysqli is compiled code while Predis is plain PHP, and extensions are faster.
A benchmark of the kind shown in the question primarily shows the performance loss of PHP code versus an extension.
Compare like with like
If you want to make a comparison of write performance, you'll need to compare against the phpredis extension.
| Redis | 13,126,431 | 24 |
I have been using Dalli until now for caching, and today I came across redis-store.
I am wondering whether I should switch to redis-store. My app already uses Redis for certain stuff, so I have a Redis server which is quite big (in terms of resources), and I also have another memcached server. So if I were to switch to redis-store it would mean that I can remove the memcached server (fewer servers to maintain + less cost).
Has anyone done a comparison of these 2 solutions?
Performance
Is it a drop-in replacement (can I switch between these 2 anytime without code changes)?
Any other stuff I should know about?
| Redis can be used as a cache or as a permanent store, but if you try to mix both, you can end up having "interesting issues".
When you have memcached, you have a maximum amount of memory for the process, so when memcached gets full it will automatically remove the least recently used entries to make room for the new entries.
You can configure Redis to have that behaviour too, but you don't want to do that if you are using Redis for persistent storage, because in that case you would potentially lose keys that are meant to be persistent.
So if you are using persistent storage for Redis, you would need to have two different Redis processes: one for your persistant keys, one for caching. Of course you could always have only one process and set expiring times to every cache item, but no one would assure you you don't hit the memory limit before they expire and you lose data, so in practice you would need two processes. Besides, if you are setting a master/slave configuration for your persistent data and you store cache on the same server, you are basically wasting RAM, so separate processes are the way to go.
About performance, both Redis and memcached are VERY performant, and in different tests they are in the same range when it comes to getting/extracting data, but memcached is better when you only need a cache.
Why is this so? First of all, since memcached only has one mission, which is storing key/value pairs, it doesn't have any overhead when it comes to storing metadata. Redis on the other hand offers different data structures, so it stores more metadata with each key. One example of this: it's much "cheaper" to store data in a hash in Redis instead of using individual keys. You don't get any of this on memcached, since there is only one type of data. This means with the same amount of memory in your servers you can store more data on memcached than on Redis. If you have a relatively small installation you don't really care, but the moment you start seeing growth, believe me you will want to keep that data under control.
So, as much as I like Redis, I prefer to have memcached for my caching needs and Redis for my persistent storage/temporary storage/queue needs. I still use Redis as a "cache", not a temporary one with expiration, but as a lookup cache to save reading from more expensive storage. For example, I keep a mapping between user IDs and nicknames in Redis. I never expire these mappings, so Redis is a perfect place for them.
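That kind of lookup cache maps naturally onto a Redis hash; a small redis-py sketch (the key and field layout here are made up for the example):
import redis

r = redis.Redis(decode_responses=True)

# One hash holding every user_id -> nickname mapping (cheaper than one key per user)
r.hset("user:nicknames", mapping={"1001": "alice", "1002": "bob"})

# O(1) lookup of a single nickname, no expiry needed
nickname = r.hget("user:nicknames", "1001")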
If you are dealing with a small amount of data, your idea of having a single technology for everything might make sense, but the moment you start growing over a few hundred MB, I would say go with both of them.
| Redis | 11,076,902 | 24 |
When using redis, it gives me the error:
ERR command not allowed when used memory > 'maxmemory'
The info command reveals:
redis 127.0.0.1:6379> info
redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:kqueue
gcc_version:4.2.1
process_id:1881
uptime_in_seconds:116
uptime_in_days:0
lru_clock:1222663
used_cpu_sys:0.04
used_cpu_user:0.04
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:930912
used_memory_human:909.09K
used_memory_rss:1269760
used_memory_peak:931408
used_memory_peak_human:909.58K
mem_fragmentation_ratio:1.36
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:4
bgsave_in_progress:0
last_save_time:1333432389
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:2
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master
Is the used_memory high? I'm a complete Redis noob. If so, how does this problem occur and how should I proceed from here? This same error is also occurring in production (Heroku), so any help is greatly appreciated. Thank you.
| This message is returned when maxmemory limit has been reached.
You can check what the current limit is by using the following command:
redis 127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "128000000"
The result is in bytes.
Please note an empty Redis instance uses about 710KB of memory (on Linux), so if you plan to store only 1MB of useful data and enforce this limit, then you need to set 1734K in the maxmemory parameter. In the configuration file, the maxmemory setting is in bytes, unless you use a K, M, or G suffix.
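You can also inspect and adjust the limit and the eviction policy at runtime; a redis-py sketch is below. Note that CONFIG SET only affects the running instance, so mirror the change in redis.conf (or use CONFIG REWRITE on Redis 2.8+) to keep it across restarts, and for queue data such as Resque jobs you normally want to raise the limit rather than let keys be evicted:
import redis

r = redis.Redis()

print(r.config_get("maxmemory"))                 # current limit in bytes, 0 means unlimited
r.config_set("maxmemory", "100mb")               # raise the limit for the running instance
r.config_set("maxmemory-policy", "allkeys-lru")  # evict old keys instead of rejecting writes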
Redis stores everything in memory (it never spills data on the disk), so all the content of your Resque queues has to fit. A few MB seem very low for a Resque engine.
You did not specify which Heroku option you selected, but my understanding is that the Redis To Go "nano" option (the free one) has a 5 MB limit.
| Redis | 9,987,832 | 24 |
Even though Redis and message queueing software are usually used for different purposes, I would like to ask about the pros and cons of using Redis for the following use case:
A group of event collectors writes incoming messages as key/value pairs; consumers fetch and delete processed keys.
Load starting from 100k msg/s and going beyond 250k in a short period of time (like months); the target is to achieve a million msg/s.
Persistence is not strictly required; it is OK to lose non-journaled messages during a failure.
Performance is very important (and so is the number of systems required to handle the load).
Messages do not have to be processed in the order they arrive.
Do you know of such use cases where Redis was chosen over traditional message queueing software? Or would you consider something else?
Note: I have also seen this, but it did not help:
Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?
thanks
| Given your requirements I would try Redis. It will perform better than other solutions and give you much finer grained control over the persistence characteristics. Depending on the language you're using you may be able to use a sharded Redis cluster (you need Redis bindings that support consistent hashing -- not all do). This will let you scale out to the volume you indicated. I've seen 10k/sec on my laptop in some basic tests.
You'll probably want to use the list operations in Redis (LPUSH for writes, BRPOP for reads) if you want queue semantics.
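A minimal sketch of that queue pattern with redis-py (the queue name is arbitrary; BRPOP blocks until an item arrives or the timeout expires):
import json
import redis

r = redis.Redis(decode_responses=True)

# Producer: collectors push messages onto the left end of the list
r.lpush("events", json.dumps({"id": 1, "payload": "..."}))

# Consumer: block for up to 5 seconds waiting for the next message
item = r.brpop("events", timeout=5)
if item is not None:
    _queue, raw = item
    message = json.loads(raw)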
I have a former client that deployed Redis in production as a message queue last spring and they've been very happy with it.
| Redis | 7,506,118 | 23 |
Production environment is on Azure, using Redis Cache Standard 2.5GB.
Example 1
System.Web.HttpUnhandledException (0x80004005): Exception of type
'System.Web.HttpUnhandledException' was thrown. --->
StackExchange.Redis.RedisTimeoutException: Timeout performing SETNX
User.313123, inst: 49, mgr: Inactive, err: never, queue: 0, qu: 0, qs:
0, qc: 0, wr: 0, wq: 0, in: 0, ar: 0, clientName: PRD-VM-WEB-2,
serverEndpoint: Unspecified/Construct3.redis.cache.windows.net:6380,
keyHashSlot: 15649, IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER:
(Busy=1,Free=32766,Min=1,Max=32767) (Please take a look at this
article for some common client-side issues that can cause timeouts:
http://stackexchange.github.io/StackExchange.Redis/Timeouts) at
StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message
message, ResultProcessor1 processor, ServerEndPoint server) in
c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\ConnectionMultiplexer.cs:line
2120 at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message,
ResultProcessor1 processor, ServerEndPoint server) in
c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\RedisBase.cs:line
81
Example 2
StackExchange.Redis.RedisTimeoutException: Timeout performing GET
ForumTopic.33831, inst: 1, mgr: Inactive, err: never, queue: 2, qu: 0,
qs: 2, qc: 0, wr: 0, wq: 0, in: 0, ar: 0, clientName: PRD-VM-WEB-2,
serverEndpoint: Unspecified/Construct3.redis.cache.windows.net:6380,
keyHashSlot: 5851, IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER:
(Busy=1,Free=32766,Min=1,Max=32767) (Please take a look at this
article for some common client-side issues that can cause timeouts:
http://stackexchange.github.io/StackExchange.Redis/Timeouts) at
StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message
message, ResultProcessor1 processor, ServerEndPoint server) in
c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\ConnectionMultiplexer.cs:line
2120 at StackExchange.Redis.RedisBase.ExecuteSync[T](Message
message, ResultProcessor1 processor, ServerEndPoint server) in
c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\RedisBase.cs:line
81 at StackExchange.Redis.RedisDatabase.StringGet(RedisKey key,
CommandFlags flags) in
c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\RedisDatabase.cs:line
1647 at
C3.Code.Controls.Application.Caching.Distributed.DistributedCacheController.Get[T](String
cacheKey) in
C:\Construct.net\Source\C3Alpha2\Code\Controls\Application\Caching\Distributed\DistributedCacheController.cs:line
115 at
C3.Code.Controls.Application.Caching.Manager.Manager.Get[T](String
key, Func`1 getFromExternFunction, Boolean skipLocalCaches) in
C:\Construct.net\Source\C3Alpha2\Code\Controls\Application\Caching\Manager\Manager.cs:line
159 at C3.PageControls.Forums.TopicRender.Page_Load(Object sender,
EventArgs e) in
C:\Construct.net\Source\C3Alpha2\PageControls\Forums\TopicRender.ascx.cs:line
40 at System.Web.UI.Control.OnLoad(EventArgs e) at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Control.LoadRecursive() at
System.Web.UI.Page.ProcessRequestMain(Boolean
includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
These errors are sporadic, several times a day.
Is this an Azure network blip, or something I can reduce? Looking at the numbers in the error doesn't seem anything out of the ordinary, and the server load never seems to go above 7% as reported by Azure.
Redis connection
internal static class RedisController
{
private static readonly object GetConnectionLock = new object();
public static ConnectionMultiplexer GetConnection()
{
if (Global.RedisConnection == null)
{
lock (GetConnectionLock)
{
if (Global.RedisConnection == null)
{
Global.RedisConnection = ConnectionMultiplexer.Connect(
Settings.Deployment.RedisConnectionString);
}
}
}
return Global.RedisConnection;
}
}
| There are 3 scenarios that can cause timeouts, and it is hard to know which is in play:
the library is tripping over; in particular, there are known issues relating to the TLS implementation and how we handle the read loop in the v1.* version of the library - something that we have invested a lot of time working on for v2.* (however: it is not always trivial to update to v2, especially if you're using the library as part of other code that depend on a specific version)
the server/network is tripping over; this is a very real possibility - looking at "slowlog" can help if it is server-side (a quick way to pull the slowlog from a client is sketched after this list), but I don't have any visibility of that
the server and network are fine, and the library is doing what it can, but there are some huge blobs flying between client and server that are delaying other operations; this is something that I'm making changes to help identify right now, and if this shows itself to be a common problem, we'll perhaps look at making better use of concurrent connections (which doesn't increase bandwidth, but can reduce latency for blocked operations) - this would be a v2 only change, note
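Regarding the server-side case, the slowlog can be pulled from any client; a redis-py sketch against the Azure endpoint from the question (the password is elided, and Azure's port 6380 requires TLS):
import redis

r = redis.Redis(host="Construct3.redis.cache.windows.net", port=6380,
                password="...", ssl=True)

# Most recent slow commands; duration is reported in microseconds
for entry in r.slowlog_get(10):
    print(entry["id"], entry["duration"], entry["command"])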
| Redis | 51,651,796 | 23 |
1167:M 26 Apr 13:00:34.666 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1167:M 26 Apr 13:00:34.667 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
1167:M 26 Apr 13:00:34.667 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1167:M 26 Apr 13:00:34.685 # Creating Server TCP listening socket 192.34.62.56:6379: Name or service not known
1135:M 26 Apr 20:34:24.308 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1135:M 26 Apr 20:34:24.309 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
1135:M 26 Apr 20:34:24.309 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1135:M 26 Apr 20:34:24.330 # Creating Server TCP listening socket 192.34.62.56:6379: Name or service not known
| Well, it's a bit late for this post, but since I just spent a lot of time (the whole night) configuring a new Redis server 3.0.6 on Ubuntu 16.04, I think I should just write down how I did it so others don't have to waste their time...
For a newly installed Redis server, you are probably going to see the following issues in the Redis log file, which is /var/log/redis/redis-server.log
Maximum Open Files
3917:M 16 Sep 21:59:47.834 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
3917:M 16 Sep 21:59:47.834 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
3917:M 16 Sep 21:59:47.834 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
I have seen a lot of posts telling you to modify
/etc/security/limits.conf
redis soft nofile 10000
redis hard nofile 10000
or
/etc/sysctl.conf
fs.file-max = 100000
That might work in Ubuntu 14.04, but it certainly does not work in Ubuntu 16.04. I guess it has something to do with the change from upstart to systemd, but I am no expert on the Linux kernel!
To fix this you have to do it the systemd way
/etc/systemd/system/redis.service
[Service]
...
User=redis
Group=redis
# should be fine as long as you add it under [Service] block
LimitNOFILE=65536
...
Then you must reload the systemd daemon and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart redis.service
To check if it works, try to cat proc limits
cat /run/redis/redis-server.pid
cat /proc/PID/limits
and you will see
Max open files 65536 65536 files
Max locked memory 65536 65536 bytes
At this stage, the maximum open files issue is solved.
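A quick way to confirm the fix is to ask the running server what it actually applied; a redis-py sketch, assuming the default maxclients of 10000 in redis.conf:
import redis

r = redis.Redis()
print(r.config_get("maxclients"))   # expect {'maxclients': '10000'} once the limit is raised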
Socket Maximum Connection
2222:M 16 Sep 20:38:44.637 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Memory Overcommit
2222:M 16 Sep 20:38:44.637 # Server started, Redis version 3.0.6
2222:M 16 Sep 20:38:44.637 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Since these two are related, we will solve them at once.
sudo vi /etc/sysctl.conf
# Add at the bottom of file
vm.overcommit_memory = 1
net.core.somaxconn=1024
Now for these configs to work, you need to reload the config
sudo sysctl -p
Transparent Huge Pages
1565:M 16 Sep 22:48:00.993 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
To permanently solve this, follow the log's suggestion, and modify rc.local
sudo vi /etc/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
This requires a reboot, so back up your data and do anything else you need before you actually do it!!
sudo reboot
Now check your Redis log again; you should have a Redis server without any errors or warnings.
| Redis | 36,880,321 | 23 |
I have an ASP.NET MVC application that runs on server A and some web services that run on server B. I have implemented real-time notifications for which I have used SignalR on server A. But now I need server B to also be able to send messages to a View served from server A (the main web application). Hence, I am trying the tutorial here to involve Redis backplane.
In my startup in server A, I have added the following:
GlobalHost.DependencyResolver.UseRedis("localhost", 6379, string.Empty, "abc");
app.MapHubs();
Here, I assume that "abc" indicates the channel, and when I run publish abc "hello world" on the Redis console, I can see the subscriber count returned as 1, but I am not able to figure out how a SignalR hub interacts with the channel. Where do I receive the message on the server/view? Can we subscribe to only one Redis channel? Can't we dynamically configure it to subscribe to a particular channel?
EDIT: I can see messages sent from a chat application implemented using SignalR on the Redis console if I subscribe to abc.
Also, for now I have implemented my own Redis listener on server A which, on receiving a message from the Redis channel, calls the SignalR hub function. I am sure there must be a different way to do this, and I am hoping the Redis backplane can help me, but I am unsure how it works.
| Backplane distributes messages between servers.
GlobalHost.DependencyResolver.UseRedis("localhost", 6379, string.Empty, "abc");
Here, abc is the Redis channel; whichever servers are connected to the Redis server with this channel will share messages. A SignalR channel (group) is different from a Redis channel. SignalR channel (group) messages are shared as well.
Then just install the Microsoft.AspNet.SignalR.Redis NuGet to your servers.
Connect your servers to Redis like this:
GlobalHost.DependencyResolver.UseRedis("server", port, "password", "AppName");
app.MapSignalR();
Then, use SignalR as before. You don't have to do anything else.
When server A sends a message to the clients, it will send the message first to Redis. Then Redis will share the message with all subscribers (servers A and B). Then, A and B will send the message to their clients. (The reverse is also true; the same happens if B sends a message.)
Let's say A sends a message to the clients. _context.Clients.All.TestMessage("Hello");
This will go first to Redis, and Redis will share it with A and B.
Then both A and B will send this message to their clients.
_context.Clients.All.TestMessage("Hello");
But you don't have to worry about these kinds of things. As I said before: install the package, connect your servers to Redis, and use SignalR as before.
Coming back to your question: the answer is yes. Server B can send messages to server A's clients through the SignalR backplane.
| Redis | 36,161,600 | 23 |
I have a web application using Django, and I am using Celery for some asynchronous task processing.
For Celery, I am using RabbitMQ as a broker and Redis as a result backend.
RabbitMQ and Redis are running on the same Ubuntu 14.04 server hosted on a local virtual machine.
Celery workers are running on remote machines (Windows 10) (no workers are running on the Django server).
I have three issues (I think they are somehow related!).
The tasks stay in the 'PENDING' state no matter whether they succeeded or failed.
The tasks don't retry when they fail, and I get this error when trying to retry:
reject requeue=False: [WinError 10061] No connection could be made
because the target machine actively refused it
The result backend doesn't seem to work.
I am also confused about my settings, and I don't know exactly where these issues might come from!
So here are my settings so far:
my_app/settings.py
# region Celery Settings
CELERY_CONCURRENCY = 1
CELERY_ACCEPT_CONTENT = ['json']
# CELERY_RESULT_BACKEND = 'redis://:C@pV@[email protected]:6379/0'
BROKER_URL = 'amqp://soufiaane:C@pV@[email protected]:5672/cvcHost'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
CELERY_REDIS_HOST = 'cvc.ma'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_RESULT_BACKEND = 'redis'
CELERY_RESULT_PASSWORD = "C@pV@lue2016"
REDIS_CONNECT_RETRY = True
AMQP_SERVER = "cvc.ma"
AMQP_PORT = 5672
AMQP_USER = "soufiaane"
AMQP_PASSWORD = "C@pV@lue2016"
AMQP_VHOST = "/cvcHost"
CELERYD_HIJACK_ROOT_LOGGER = True
CELERY_HIJACK_ROOT_LOGGER = True
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# endregion
my_app/celery_settings.py
from __future__ import absolute_import
from django.conf import settings
from celery import Celery
import django
import os
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@[email protected]/cvcHost', backend='redis://:C@pV@[email protected]:6379/0')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
my_app__init__.py
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_settings import app as celery_app
my_app\email\tasks.py
from __future__ import absolute_import
from my_app.celery_settings import app
# here i only define the task skeleton because i'm executing this task on remote workers !
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
try:
print("x")
except Exception as exc:
self.retry(exc=exc)
On the worker side I have one file, 'tasks.py', which has the actual implementation of the task:
Worker\tasks.py
from __future__ import absolute_import
from celery.utils.log import get_task_logger
from celery import Celery
logger = get_task_logger(__name__)
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@[email protected]/cvcHost', backend='redis://:C@pV@[email protected]:6379/0')
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
try:
"""
The actual implementation of the task
"""
except Exception as exc:
self.retry(exc=exc)
What I did notice though is:
When I change the broker settings in my workers to a bad password, I get a "could not connect to broker" error.
When I change the result backend settings in my workers to a bad password, it runs normally as if everything is OK.
What could possibly be causing these problems?
EDIT
On my Redis server, I have already enabled remote connections:
/etc/redis/redis.conf
...
bind 0.0.0.0
...
| My guess is that your problem is in the password.
Your password has @ in it, which could be interpreted as a divider between the user:pass and the host section.
The workers stay in pending because they could not connect to the broker correctly.
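One way to keep that password is to percent-encode it before building the URLs, assuming your Celery/Kombu version accepts percent-encoded credentials (recent versions do); a sketch:
from urllib.parse import quote   # Python 2: from urllib import quote

password = quote("C@pV@lue2016", safe="")   # -> "C%40pV%40lue2016"

BROKER_URL = "amqp://soufiaane:%s@cvc.ma:5672/cvcHost" % password
CELERY_RESULT_BACKEND = "redis://:%s@cvc.ma:6379/0" % password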
From celery's documentation
http://docs.celeryproject.org/en/latest/userguide/tasks.html#pending
PENDING
Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state.
| Redis | 35,539,778 | 23 |
I have been using Redis and Jedis for quite some time and have never needed the SCAN commands so far. Now however I need to use the SCAN commands, particularly hscan. I understand how it works on the Redis level, but the Jedis Java wrapper side is confusing to me. There are ScanResult and ScanParams classes floating around, and I have no clear concept of how to use them properly. The documentation for this feature is non-existent or at least hard to find. Can anyone point out where to find decent examples of how to iterate over a hash using hscan with Jedis?
Sorry to have no code, but what I tried so far just makes no sense whatsoever.
| In the good tradition of answering own questions, here is what I found out:
String key = "THEKEY";
ScanParams scanParams = new ScanParams().count(100);
String cur = redis.clients.jedis.ScanParams.SCAN_POINTER_START;
boolean cycleIsFinished = false;
while(!cycleIsFinished) {
ScanResult<Entry<String, String>> scanResult =
jedis.hscan(key, cur, scanParams);
List<Entry<String, String>> result = scanResult.getResult();
//do whatever with the key-value pairs in result
cur = scanResult.getStringCursor();
if (cur.equals("0")) {
cycleIsFinished = true;
}
}
The important part is that cur is a String variable and it is "0" if the scan is complete.
With the help of ScanParams I was able to define the approximate size of each chunk to get from the hash. Approximate, because the hash might change during the scan, so it may be that an element is returned twice in the loop.
| Redis | 33,842,026 | 23 |
I have used StackExchange.Redis for C# Redis caching.
cache.StringSet("Key1", CustomerObject);
but I want to store data like
cache.StringSet("Key1", ListOfCustomer);
so that one key stores the whole customer list and it is easy to
search, group, and filter the customer data inside that list.
Answers are welcome using ServiceStack.Redis or StackExchange.Redis
| If you use StackExchange.Redis, you can use the List methods on its API.
Here is a naive implementation of IList using a redis list to store the items.
Hopefully it can help you to understand some of the list API methods:
public class RedisList<T> : IList<T>
{
private static ConnectionMultiplexer _cnn;
private string key;
public RedisList(string key)
{
this.key = key;
_cnn = ConnectionMultiplexer.Connect("localhost");
}
private IDatabase GetRedisDb()
{
return _cnn.GetDatabase();
}
private string Serialize(object obj)
{
return JsonConvert.SerializeObject(obj);
}
private T Deserialize<T>(string serialized)
{
return JsonConvert.DeserializeObject<T>(serialized);
}
public void Insert(int index, T item)
{
var db = GetRedisDb();
var before = db.ListGetByIndex(key, index);
db.ListInsertBefore(key, before, Serialize(item));
}
public void RemoveAt(int index)
{
var db = GetRedisDb();
var value = db.ListGetByIndex(key, index);
if (!value.IsNull)
{
db.ListRemove(key, value);
}
}
public T this[int index]
{
get
{
var value = GetRedisDb().ListGetByIndex(key, index);
return Deserialize<T>(value.ToString());
}
set
{
Insert(index, value);
}
}
public void Add(T item)
{
GetRedisDb().ListRightPush(key, Serialize(item));
}
public void Clear()
{
GetRedisDb().KeyDelete(key);
}
public bool Contains(T item)
{
for (int i = 0; i < Count; i++)
{
if (GetRedisDb().ListGetByIndex(key, i).ToString().Equals(Serialize(item)))
{
return true;
}
}
return false;
}
public void CopyTo(T[] array, int arrayIndex)
{
GetRedisDb().ListRange(key).CopyTo(array, arrayIndex);
}
public int IndexOf(T item)
{
for (int i = 0; i < Count; i++)
{
if (GetRedisDb().ListGetByIndex(key, i).ToString().Equals(Serialize(item)))
{
return i;
}
}
return -1;
}
public int Count
{
get { return (int)GetRedisDb().ListLength(key); }
}
public bool IsReadOnly
{
get { return false; }
}
public bool Remove(T item)
{
return GetRedisDb().ListRemove(key, Serialize(item)) > 0;
}
public IEnumerator<T> GetEnumerator()
{
for (int i = 0; i < this.Count; i++)
{
yield return Deserialize<T>(GetRedisDb().ListGetByIndex(key, i).ToString());
}
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
for (int i = 0; i < this.Count; i++)
{
yield return Deserialize<T>(GetRedisDb().ListGetByIndex(key, i).ToString());
}
}
}
Note the use of Newtonsoft.Json for the serialization.
You will need the following nu-get packages:
Install-Package Newtonsoft.Json
Install-Package StackExchange.Redis
After reading your question and comments, since you want to access elements by key, I think you're looking for Redis Hashes, which are maps composed of fields associated with values.
So you can have a Redis Key for a Hash containing all your Customers, each one being a Value associated to a Field. You can choose the CustomerId as the Field, so you can then get a customer by its id in O(1).
I think implementing IDictionary is a good way to see it working.
So a RedisDictionary class similar to the RedisList but using a Redis Hash could be:
public class RedisDictionary<TKey, TValue> : IDictionary<TKey, TValue>
{
private static ConnectionMultiplexer _cnn;
private string _redisKey;
public RedisDictionary(string redisKey)
{
_redisKey = redisKey;
_cnn = ConnectionMultiplexer.Connect("localhost");
}
private IDatabase GetRedisDb()
{
return _cnn.GetDatabase();
}
private string Serialize(object obj)
{
return JsonConvert.SerializeObject(obj);
}
private T Deserialize<T>(string serialized)
{
return JsonConvert.DeserializeObject<T>(serialized);
}
public void Add(TKey key, TValue value)
{
GetRedisDb().HashSet(_redisKey, Serialize(key), Serialize(value));
}
public bool ContainsKey(TKey key)
{
return GetRedisDb().HashExists(_redisKey, Serialize(key));
}
public bool Remove(TKey key)
{
return GetRedisDb().HashDelete(_redisKey, Serialize(key));
}
public bool TryGetValue(TKey key, out TValue value)
{
var redisValue = GetRedisDb().HashGet(_redisKey, Serialize(key));
if (redisValue.IsNull)
{
value = default(TValue);
return false;
}
value = Deserialize<TValue>(redisValue.ToString());
return true;
}
public ICollection<TValue> Values
{
get { return new Collection<TValue>(GetRedisDb().HashValues(_redisKey).Select(h => Deserialize<TValue>(h.ToString())).ToList()); }
}
public ICollection<TKey> Keys
{
get { return new Collection<TKey>(GetRedisDb().HashKeys(_redisKey).Select(h => Deserialize<TKey>(h.ToString())).ToList()); }
}
public TValue this[TKey key]
{
get
{
var redisValue = GetRedisDb().HashGet(_redisKey, Serialize(key));
return redisValue.IsNull ? default(TValue) : Deserialize<TValue>(redisValue.ToString());
}
set
{
Add(key, value);
}
}
public void Add(KeyValuePair<TKey, TValue> item)
{
Add(item.Key, item.Value);
}
public void Clear()
{
GetRedisDb().KeyDelete(_redisKey);
}
public bool Contains(KeyValuePair<TKey, TValue> item)
{
return GetRedisDb().HashExists(_redisKey, Serialize(item.Key));
}
public void CopyTo(KeyValuePair<TKey, TValue>[] array, int arrayIndex)
{
GetRedisDb().HashGetAll(_redisKey).CopyTo(array, arrayIndex);
}
public int Count
{
get { return (int)GetRedisDb().HashLength(_redisKey); }
}
public bool IsReadOnly
{
get { return false; }
}
public bool Remove(KeyValuePair<TKey, TValue> item)
{
return Remove(item.Key);
}
public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
{
var db = GetRedisDb();
foreach (var hashKey in db.HashKeys(_redisKey))
{
var redisValue = db.HashGet(_redisKey, hashKey);
yield return new KeyValuePair<TKey, TValue>(Deserialize<TKey>(hashKey.ToString()), Deserialize<TValue>(redisValue.ToString()));
}
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
// Delegate to the generic enumerator instead of yielding the enumerator object itself
return GetEnumerator();
}
public void AddMultiple(IEnumerable<KeyValuePair<TKey, TValue>> items)
{
GetRedisDb()
.HashSet(_redisKey, items.Select(i => new HashEntry(Serialize(i.Key), Serialize(i.Value))).ToArray());
}
}
And here are some examples to use it:
// Insert customers to the cache
var customers = new RedisDictionary<int, Customer>("customers");
customers.Add(100, new Customer() { Id = 100, Name = "John" });
customers.Add(200, new Customer() { Id = 200, Name = "Peter" });
// Or if you have a list of customers retrieved from DB:
IList<Customer> customerListFromDb;
customers.AddMultiple(customerListFromDb.ToDictionary(k => k.Id));
// Query a customer by its id
var customers = new RedisDictionary<int, Customer>("customers");
Customer customer100 = customers[100];
Update (Oct 2015)
A better implementation of these collections can be found in the CachingFramework.Redis library.
Here is the code.
| Redis | 31,955,977 | 23 |
The following is the Jedis documentation, copied directly from the Jedis GitHub page:
List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
JedisShardInfo si = new JedisShardInfo("localhost", 6379);
si.setPassword("foobared");
shards.add(si);
si = new JedisShardInfo("localhost", 6380);
si.setPassword("foobared");
shards.add(si);
Then, there are two ways of using ShardedJedis. Direct connections or by using ShardedJedisPool. For reliable operation, the latter has to be used in a multithreaded environment.
2.a) Direct connection:
ShardedJedis jedis = new ShardedJedis(shards);
jedis.set("a", "foo");
jedis.disconnect();
2.b) Pooled connection:
ShardedJedisPool pool = new ShardedJedisPool(new Config(), shards);
ShardedJedis jedis = pool.getResource();
jedis.set("a", "foo");
.... // do your work here
pool.returnResource(jedis);
.... // a few moments later
ShardedJedis jedis2 = pool.getResource();
jedis.set("z", "bar");
pool.returnResource(jedis);
pool.destroy();
Above example shows how to use ShardedJedis.
In my current setup, I am using RedisTemplate and JedisConnectionFactory.
My question is
How do I use ShardedJedis with RedisTemplate?
| I don't think RedisTemplate directly supports your case. RedisTemplate offers a high-level abstraction for Redis interactions,
while RedisConnection offers low-level methods that accept and return binary values (byte arrays).
See: Working with Objects through RedisTemplate
| Redis | 29,616,706 | 23 |
I am using django-rq to handle some long-running tasks on my django site. These tasks trip the 180 second timeout of the (I assume) rqworker:
JobTimeoutException: Job exceeded maximum timeout value (180 seconds).
How can I increase this timeout value? I've tried adding --timeout 360 to the rqworker command but this isn't recognized.
In my python code, my long-running job is called via
django_rq.enqueue(
populate_trends,
self,
)
and have tried
django_rq.enqueue_call(
func=populate_trends,
args=(self,),
timeout=3600,
)
which I noticed in the rq docs but django-rq has no such method it seems.
Update
For now I forked django-rq and added a placeholder fix to increase the timeout. Probably need to work with the project to get a longer term solution. I've started an issue there to discuss.
| This seems to be the right way to approach the problem.
queue = django_rq.get_queue('default')
queue.enqueue(populate_trends, args=(self,), timeout=500)
If you need to pass kwargs,
queue = django_rq.get_queue('default')
queue.enqueue(populate_trends, args=(self,), kwargs={'x': 1,}, timeout=500)
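If you define the task yourself, you can also bake the timeout into the task with the @job decorator that django-rq re-exports from rq; this is a sketch and assumes your django-rq version forwards the timeout argument to rq's decorator:
from django_rq import job

@job('default', timeout=3600)       # assumption: timeout is passed through to rq's Job
def populate_trends(instance):
    ...

populate_trends.delay(instance)     # enqueued on 'default' with the 3600-second timeout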
Thanks to selwin at the django-rq project for the help.
| Redis | 15,445,036 | 23 |
I was reading Redis documentation, and I am most interested in the partitioning feature.
Redis documentation states the following:
Data store or cache? Partitioning when using Redis as a data store or
cache is conceptually the same, however there is a huge difference.
While when Redis is used as a data store you need to be sure that a
given key always maps to the same instance, when Redis is used as a
cache if a given node is unavailable it is not a big problem if we
start using a different node, altering the key-instance map as we wish
to improve the availability of the system (that is, the ability of the
system to reply to our queries). Consistent hashing implementations
are often able to switch to other nodes if the preferred node for a
given key is not available. Similarly if you add a new node, part of
the new keys will start to be stored on the new node. The main concept
here is the following: If Redis is used as a cache scaling up and down
using consistent hashing is easy. If Redis is used as a store, we need
to take the map between keys and nodes fixed, and a fixed number of
nodes. Otherwise we need a system that is able to rebalance keys
between nodes when we add or remove nodes, and currently only Redis
Cluster is able to do this, but Redis Cluster is not production ready.
From the last sentence I understand that Redis Cluster is not production ready. Does anyone know whether this documentation is up to date, or whether Redis Cluster is already production ready?
| [Update] Redis Cluster was released in Redis 3.0.0 on 1 Apr 2015.
Redis cluster is currently in active development. See this article from Redis author: Antirez.
So I can pause other incremental improvements for a bit to focus on Redis Cluster. Basically my plan is to work mostly to cluster as long as it does not reach beta quality, and for beta quality I mean, something that brave users may put into production.
Redis Cluster will support up to ~1000 nodes.
The first release will have the following features (extracted from Antirez post):
Automatic partition of key space.
Hot resharding.
Only single key operations supported (and it will always be that way).
As of today antirez is working on the first Redis cluster client (redis-rb-cluster) in order to be used as a reference implementation.
I'll update this answer as soon as Redis Cluster goes production ready.
[Update] 03/28/2014 Redis Cluster is already used on large clusters in production (source: antirez tweets).
| Redis | 14,941,897 | 23 |
I have a model class that caches data in redis. The first time I call a method on the model, it computes a JSON/Hash value and stores it in Redis. Under certain circumstances I 'flush' that data and it gets recomputed on the next call.
Here's the code snippet similar to the one I use to store the data in Redis:
def cache_data
self.data_values = data_to_cache
REDIS.set(redis_key,ActiveSupport::JSON.encode(self.data_values))
REDIS.get(redis_key)
end
def data_to_cache
# generate a hash of values to return
end
How should I unit test this code? I use RSpec and Capybara. I also use Cucumber and Capybara for integration testing if that helps.
| I like to have redis running while the tests are running. Redis, unlike e.g. postgres, is extremely fast and doesn't slow down test run time noticeably.
Just make sure you call REDIS.flush in a before(:each) block, or the corresponding cucumber hook.
You can test data_to_cache independently of redis, but unless you can fully trust the redis driver you're using and the contract it provides, it's safer to actually test cache_data (and the corresponding cache fetch method) live. That also allows you to switch to a different redis driver (or to a different fast KV store) without a wholesale rewrite of your tests.
| Redis | 10,501,461 | 23 |
Premise
Hi,
I received multiple reports from a Redis user that experienced server crashes, using a Redis stable release (latest, 2.4.6). The bug is strange since the user is not doing esoteric things, just working a lot with the sorted set type, and only with the ZADD, ZREM, and ZREVRANK commands. However it is strange that a bug like that, causing crashes after a few billion operations executed, was only experienced by a single user. Fortunately the user in question is extremely helpful and collaborated a lot in the tracking of the issue, so I was able to obtain many times logs with the exact sequence of operations performed by Redis, that I re-played locally without result, I also tried to write scripts to closely mimic the kind of work load, to perform in-depth code reviews of the skip list implementation, and so forth.
Even after all this efforts no way to reproduce the issue locally.
It is also worth to mention that at some point the the user started sending the exact same traffic to another box running the same Redis version, but compiled with another gcc, and running in different hardware: so far no issues in this second instance. Still I want to understand what is happening.
So finally I setup a different strategy with the user and asked him to run Redis using gdb, in order to obtain a core file. Finally Redis crashed again and I now have both the core file and the executable. Unfortunately I forgot to ask the user to compile Redis without optimizations.
I need the stack overflow community help since with GDB I reach some conclusion but I've no really idea what could be happening here: at some point a function computes a pointer, and when it calls another function magically that pointer is different, pointing to a memory location that does not hold the right kind of data.
GDB session
The original executable was compiled with GCC 4.4.5-8, this is a GDB session that shows my investigation:
gdb ./redis-server core.16525
GNU gdb (GDB) 7.1-ubuntu
[snip]
Program terminated with signal 11, Segmentation fault.
#0 0x00007f3d9ecd216c in __pthread_rwlock_tryrdlock (rwlock=0x1)
at pthread_rwlock_tryrdlock.c:46
46 pthread_rwlock_tryrdlock.c: No such file or directory.
in pthread_rwlock_tryrdlock.c
Actually the stack trace shown is about a secondary thread doing nothing (you can safely consider Redis a single-threaded app; the other threads are only used to perform things like fsync() against a file descriptor without blocking), so let's select the right one.
(gdb) info threads
3 Thread 16525 zslGetRank (zsl=0x7f3d8d71c360, score=19.498544884710096,
o=0x7f3d4cab5760) at t_zset.c:335
2 Thread 16527 0x00007f3d9ecd216c in __pthread_rwlock_tryrdlock (
rwlock=0x6b7f5) at pthread_rwlock_tryrdlock.c:46
* 1 Thread 16526 0x00007f3d9ecd216c in __pthread_rwlock_tryrdlock (rwlock=0x1)
at pthread_rwlock_tryrdlock.c:46
(gdb) thread 3
[Switching to thread 3 (Thread 16525)]#0 zslGetRank (zsl=0x7f3d8d71c360,
score=19.498544884710096, o=0x7f3d4cab5760) at t_zset.c:335
335 t_zset.c: No such file or directory.
in t_zset.c
(gdb) bt
#0 zslGetRank (zsl=0x7f3d8d71c360, score=19.498544884710096, o=0x7f3d4cab5760)
at t_zset.c:335
#1 0x000000000042818b in zrankGenericCommand (c=0x7f3d9dcdc000, reverse=1)
at t_zset.c:2046
#2 0x00000000004108d4 in call (c=0x7f3d9dcdc000) at redis.c:1024
#3 0x0000000000410c1c in processCommand (c=0x7f3d9dcdc000) at redis.c:1130
#4 0x0000000000419d3f in processInputBuffer (c=0x7f3d9dcdc000)
at networking.c:865
#5 0x0000000000419e1c in readQueryFromClient (el=<value optimized out>,
fd=<value optimized out>, privdata=0x7f3d9dcdc000,
mask=<value optimized out>) at networking.c:908
#6 0x000000000040d4a3 in aeProcessEvents (eventLoop=0x7f3d9dc47000,
flags=<value optimized out>) at ae.c:342
#7 0x000000000040d6ee in aeMain (eventLoop=0x7f3d9dc47000) at ae.c:387
#8 0x0000000000412a4f in main (argc=2, argv=<value optimized out>)
at redis.c:1719
We also generated a backtrace. As you can see call() is dispatching the ZREVRANK command, so zrankGenericCommand() is called with the client structure and the reverse=1 argument (since it is a REV rank). We can easily investigate to check what the arguments of the ZREVRANK command are.
(gdb) up
#1 0x000000000042818b in zrankGenericCommand (c=0x7f3d9dcdc000, reverse=1)
at t_zset.c:2046
2046 in t_zset.c
(gdb) print c->argc
$8 = 3
(gdb) print (redisClient*)c->argc
$9 = (redisClient *) 0x3
(gdb) print (char*)(redisClient*)c->argv[0]->ptr
$10 = 0x7f3d8267ce28 "zrevrank"
(gdb) print (char*)(redisClient*)c->argv[1]->ptr
$11 = 0x7f3d8267ce48 "pc_stat.hkperc"
(gdb) print (long)(redisClient*)c->argv[2]->ptr
$12 = 282472606
So the actual command generating the crash was: ZREVRANK pc_stat.hkperc 282472606
This is consistent with the client logs obtained by the user. Note that I cast the pointer to a long integer for the last argument, since Redis encodes integers this way to save memory when possible.
Now that's fine; it is time to investigate the zrankGenericCommand() that called zslGetRank(), which caused the actual crash. This is the C source code of zrankGenericCommand around line 2046:
2036 } else if (zobj->encoding == REDIS_ENCODING_SKIPLIST) {
2037 zset *zs = zobj->ptr;
2038 zskiplist *zsl = zs->zsl;
2039 dictEntry *de;
2040 double score;
2041
2042 ele = c->argv[2] = tryObjectEncoding(c->argv[2]);
2043 de = dictFind(zs->dict,ele);
2044 if (de != NULL) {
2045 score = *(double*)dictGetEntryVal(de);
2046 rank = zslGetRank(zsl,score,ele);
2047 redisAssert(rank); /* Existing elements always have a rank. */
2048 if (reverse)
2049 addReplyLongLong(c,llen-rank);
2050 else
2051 addReplyLongLong(c,rank-1);
2052 } else {
2053 addReply(c,shared.nullbulk);
2054 }
2055 }
Ok this is how it works:
We look up a Redis key containing a sorted set data type (the lookup is not included in the code). The Redis object associated with the key is stored in the zobj local variable.
The zobj ptr field is a pointer to a structure of type zset representing the sorted set.
In turn the zset structure has two pointers: one points to a hash table, and one to a skip list. This is needed since we provide element-to-score lookups in O(1), for which we need a hash table, but we also keep the elements ordered, so we use a skip list. In line 2038 the pointer to the skip list (represented by a zskiplist structure) is assigned to the zsl variable.
Later we encode the third argument (line 2042); this is why we cast the value to a long to print it from the client structure.
In line 2043 we look up the element in the dictionary, and the operation succeeds, since we know that the zslGetRank() call inside the if branch gets executed.
Finally in line 2046 we call zslGetRank() with three arguments: the pointer to the skip list, the score of the element, and the element itself.
Fine... now what is the pointer that zslGetRank() should receive in theory? We can easily investigate this by manually looking up the Redis hash table. I hashed the key manually and it maps to bucket 62 of the hash table; let's see if that is true:
(gdb) print (char*)c->db->dict->ht->table[62]->key
$13 = 0x7f3d9dc0f6c8 "pc_stat.hkperc"
Exactly as expected. Let's check the object associated:
(gdb) print *(robj*)c->db->dict->ht->table[62]->val
$16 = {type = 3, storage = 0, encoding = 7, lru = 557869, refcount = 1,
ptr = 0x7f3d9de574b0}
Type = 3, Encoding = 7, it means: it is a sorted set, encoded as a skip list. Fine again.
The sorted set address (ptr field) is 0x7f3d9de574b0, so we can inspect this as well:
(gdb) print *(zset*)0x7f3d9de574b0
$17 = {dict = 0x7f3d9dcf6c20, zsl = 0x7f3d9de591c0}
So we have:
The object associated to the key that points to a sorted set that is stored at address 0x7f3d9de574b0
In turn this sorted set is implemented with a skiplist, at address 0x7f3d9de591c0 (zsl field)
Now let's check if our two variables are set to the right values:
2037 zset *zs = zobj->ptr;
2038 zskiplist *zsl = zs->zsl;
(gdb) info locals
zs = 0x7f3d9de574b0
zsl = 0x7f3d9de591c0
de = <value optimized out>
ele = <value optimized out>
zobj = <value optimized out>
llen = 165312
rank = <value optimized out>
Everything is perfect so far: the variable zs is set to 0x7f3d9de574b0 as expected, and so is the variable zsl pointing to the skiplist, that is set to 0x7f3d9de591c0.
Now these variables are not touched in the course of the code execution:
These are the only lines of code between the assignment of the variables and the call to the zslGetRank() function:
2042 ele = c->argv[2] = tryObjectEncoding(c->argv[2]);
2043 de = dictFind(zs->dict,ele);
2044 if (de != NULL) {
2045 score = *(double*)dictGetEntryVal(de);
2046 rank = zslGetRank(zsl,score,ele);
Nobody is touching zsl, however if we check the stack trace we see that the zslGetRank() function gets called not with the address 0x7f3d9de591c0 as first argument, but with a different one:
#0 zslGetRank (zsl=0x7f3d8d71c360, score=19.498544884710096, o=0x7f3d4cab5760)
at t_zset.c:335
If you read all this you are a hero, and the reward is very small, consisting of this question: do you have an idea, even considering that hardware failure is an option, about how this argument gets modified? It seems very unlikely that the object encoding function or the hash table lookup can corrupt the stack of the caller (but apparently the argument is inside registers already at the time of the call). My assembler is not great, so if you have some clue... it is very welcome. I'll leave you with an info registers output and a disassembly:
(gdb) info registers
rax 0x6 6
rbx 0x7f3d9dcdc000 139902617239552
rcx 0xf742d0b6 4148351158
rdx 0x7f3d95efada0 139902485245344
rsi 0x7f3d4cab5760 139901256030048
rdi 0x7f3d8d71c360 139902342775648
rbp 0x7f3d4cab5760 0x7f3d4cab5760
rsp 0x7fffe61a8040 0x7fffe61a8040
r8 0x7fffe61a7fd9 140737053884377
r9 0x1 1
r10 0x7f3d9dcd4ff0 139902617210864
r11 0x6 6
r12 0x1 1
r13 0x7f3d9de574b0 139902618793136
r14 0x7f3d9de591c0 139902618800576
r15 0x7f3d8267c9e0 139902157572576
rip 0x42818b 0x42818b <zrankGenericCommand+251>
eflags 0x10206 [ PF IF RF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
(gdb) disassemble zrankGenericCommand
Dump of assembler code for function zrankGenericCommand:
0x0000000000428090 <+0>: mov %rbx,-0x30(%rsp)
0x0000000000428095 <+5>: mov %r12,-0x20(%rsp)
0x000000000042809a <+10>: mov %esi,%r12d
0x000000000042809d <+13>: mov %r14,-0x10(%rsp)
0x00000000004280a2 <+18>: mov %rbp,-0x28(%rsp)
0x00000000004280a7 <+23>: mov %rdi,%rbx
0x00000000004280aa <+26>: mov %r13,-0x18(%rsp)
0x00000000004280af <+31>: mov %r15,-0x8(%rsp)
0x00000000004280b4 <+36>: sub $0x58,%rsp
0x00000000004280b8 <+40>: mov 0x28(%rdi),%rax
0x00000000004280bc <+44>: mov 0x23138d(%rip),%rdx # 0x659450 <shared+80>
0x00000000004280c3 <+51>: mov 0x8(%rax),%rsi
0x00000000004280c7 <+55>: mov 0x10(%rax),%rbp
0x00000000004280cb <+59>: callq 0x41d370 <lookupKeyReadOrReply>
0x00000000004280d0 <+64>: test %rax,%rax
0x00000000004280d3 <+67>: mov %rax,%r14
0x00000000004280d6 <+70>: je 0x4280ec <zrankGenericCommand+92>
0x00000000004280d8 <+72>: mov $0x3,%edx
0x00000000004280dd <+77>: mov %rax,%rsi
0x00000000004280e0 <+80>: mov %rbx,%rdi
0x00000000004280e3 <+83>: callq 0x41b270 <checkType>
0x00000000004280e8 <+88>: test %eax,%eax
0x00000000004280ea <+90>: je 0x428110 <zrankGenericCommand+128>
0x00000000004280ec <+92>: mov 0x28(%rsp),%rbx
0x00000000004280f1 <+97>: mov 0x30(%rsp),%rbp
0x00000000004280f6 <+102>: mov 0x38(%rsp),%r12
0x00000000004280fb <+107>: mov 0x40(%rsp),%r13
0x0000000000428100 <+112>: mov 0x48(%rsp),%r14
0x0000000000428105 <+117>: mov 0x50(%rsp),%r15
0x000000000042810a <+122>: add $0x58,%rsp
0x000000000042810e <+126>: retq
0x000000000042810f <+127>: nop
0x0000000000428110 <+128>: mov %r14,%rdi
0x0000000000428113 <+131>: callq 0x426250 <zsetLength>
0x0000000000428118 <+136>: testw $0x3c0,0x0(%rbp)
0x000000000042811e <+142>: jne 0x4282b7 <zrankGenericCommand+551>
0x0000000000428124 <+148>: mov %eax,%eax
0x0000000000428126 <+150>: mov %rax,0x8(%rsp)
0x000000000042812b <+155>: movzwl (%r14),%eax
0x000000000042812f <+159>: and $0x3c0,%ax
0x0000000000428133 <+163>: cmp $0x140,%ax
0x0000000000428137 <+167>: je 0x4281c8 <zrankGenericCommand+312>
0x000000000042813d <+173>: cmp $0x1c0,%ax
0x0000000000428141 <+177>: jne 0x428299 <zrankGenericCommand+521>
0x0000000000428147 <+183>: mov 0x28(%rbx),%r15
0x000000000042814b <+187>: mov 0x8(%r14),%r13
0x000000000042814f <+191>: mov 0x10(%r15),%rdi
0x0000000000428153 <+195>: mov 0x8(%r13),%r14
0x0000000000428157 <+199>: callq 0x41bcc0 <tryObjectEncoding>
0x000000000042815c <+204>: mov 0x0(%r13),%rdi
0x0000000000428160 <+208>: mov %rax,0x10(%r15)
0x0000000000428164 <+212>: mov %rax,%rsi
0x0000000000428167 <+215>: mov %rax,%rbp
0x000000000042816a <+218>: callq 0x40ede0 <dictFind>
0x000000000042816f <+223>: test %rax,%rax
0x0000000000428172 <+226>: je 0x428270 <zrankGenericCommand+480>
0x0000000000428178 <+232>: mov 0x8(%rax),%rax
0x000000000042817c <+236>: mov %rbp,%rsi
0x000000000042817f <+239>: mov %r14,%rdi
0x0000000000428182 <+242>: movsd (%rax),%xmm0
0x0000000000428186 <+246>: callq 0x427fd0 <zslGetRank>
=> 0x000000000042818b <+251>: test %rax,%rax
0x000000000042818e <+254>: je 0x4282d5 <zrankGenericCommand+581>
0x0000000000428194 <+260>: test %r12d,%r12d
0x0000000000428197 <+263>: je 0x4281b0 <zrankGenericCommand+288>
0x0000000000428199 <+265>: mov 0x8(%rsp),%rsi
0x000000000042819e <+270>: mov %rbx,%rdi
0x00000000004281a1 <+273>: sub %rax,%rsi
0x00000000004281a4 <+276>: callq 0x41a430 <addReplyLongLong>
0x00000000004281a9 <+281>: jmpq 0x4280ec <zrankGenericCommand+92>
0x00000000004281ae <+286>: xchg %ax,%ax
0x00000000004281b0 <+288>: lea -0x1(%rax),%rsi
0x00000000004281b4 <+292>: mov %rbx,%rdi
0x00000000004281b7 <+295>: callq 0x41a430 <addReplyLongLong>
0x00000000004281bc <+300>: nopl 0x0(%rax)
0x00000000004281c0 <+304>: jmpq 0x4280ec <zrankGenericCommand+92>
0x00000000004281c5 <+309>: nopl (%rax)
0x00000000004281c8 <+312>: mov 0x8(%r14),%r14
0x00000000004281cc <+316>: xor %esi,%esi
0x00000000004281ce <+318>: mov %r14,%rdi
0x00000000004281d1 <+321>: callq 0x417600 <ziplistIndex>
0x00000000004281d6 <+326>: test %rax,%rax
0x00000000004281d9 <+329>: mov %rax,0x18(%rsp)
0x00000000004281de <+334>: je 0x428311 <zrankGenericCommand+641>
0x00000000004281e4 <+340>: mov %rax,%rsi
0x00000000004281e7 <+343>: mov %r14,%rdi
0x00000000004281ea <+346>: callq 0x4175c0 <ziplistNext>
0x00000000004281ef <+351>: test %rax,%rax
0x00000000004281f2 <+354>: mov %rax,0x10(%rsp)
0x00000000004281f7 <+359>: je 0x4282f3 <zrankGenericCommand+611>
0x00000000004281fd <+365>: mov 0x18(%rsp),%rdi
0x0000000000428202 <+370>: mov $0x1,%r13d
0x0000000000428208 <+376>: lea 0x10(%rsp),%r15
0x000000000042820d <+381>: test %rdi,%rdi
0x0000000000428210 <+384>: jne 0x428236 <zrankGenericCommand+422>
0x0000000000428212 <+386>: jmp 0x428270 <zrankGenericCommand+480>
0x0000000000428214 <+388>: nopl 0x0(%rax)
0x0000000000428218 <+392>: lea 0x18(%rsp),%rsi
0x000000000042821d <+397>: mov %r14,%rdi
0x0000000000428220 <+400>: mov %r15,%rdx
0x0000000000428223 <+403>: callq 0x425610 <zzlNext>
0x0000000000428228 <+408>: mov 0x18(%rsp),%rdi
0x000000000042822d <+413>: test %rdi,%rdi
0x0000000000428230 <+416>: je 0x428270 <zrankGenericCommand+480>
0x0000000000428232 <+418>: add $0x1,%r13
0x0000000000428236 <+422>: mov 0x8(%rbp),%rsi
0x000000000042823a <+426>: movslq -0x8(%rsi),%rdx
0x000000000042823e <+430>: callq 0x417a40 <ziplistCompare>
0x0000000000428243 <+435>: test %eax,%eax
0x0000000000428245 <+437>: je 0x428218 <zrankGenericCommand+392>
0x0000000000428247 <+439>: cmpq $0x0,0x18(%rsp)
0x000000000042824d <+445>: je 0x428270 <zrankGenericCommand+480>
0x000000000042824f <+447>: test %r12d,%r12d
0x0000000000428252 <+450>: je 0x428288 <zrankGenericCommand+504>
0x0000000000428254 <+452>: mov 0x8(%rsp),%rsi
0x0000000000428259 <+457>: mov %rbx,%rdi
0x000000000042825c <+460>: sub %r13,%rsi
0x000000000042825f <+463>: callq 0x41a430 <addReplyLongLong>
0x0000000000428264 <+468>: jmpq 0x4280ec <zrankGenericCommand+92>
0x0000000000428269 <+473>: nopl 0x0(%rax)
0x0000000000428270 <+480>: mov 0x2311d9(%rip),%rsi # 0x659450 <shared+80>
0x0000000000428277 <+487>: mov %rbx,%rdi
0x000000000042827a <+490>: callq 0x419f60 <addReply>
0x000000000042827f <+495>: jmpq 0x4280ec <zrankGenericCommand+92>
0x0000000000428284 <+500>: nopl 0x0(%rax)
0x0000000000428288 <+504>: lea -0x1(%r13),%rsi
0x000000000042828c <+508>: mov %rbx,%rdi
0x000000000042828f <+511>: callq 0x41a430 <addReplyLongLong>
0x0000000000428294 <+516>: jmpq 0x4280ec <zrankGenericCommand+92>
0x0000000000428299 <+521>: mov $0x44939f,%edi
0x000000000042829e <+526>: mov $0x808,%edx
0x00000000004282a3 <+531>: mov $0x44a674,%esi
0x00000000004282a8 <+536>: callq 0x432010 <_redisPanic>
0x00000000004282ad <+541>: mov $0x1,%edi
0x00000000004282b2 <+546>: callq 0x40c3a0 <_exit@plt>
0x00000000004282b7 <+551>: mov $0x44a7d0,%edi
0x00000000004282bc <+556>: mov $0x7da,%edx
0x00000000004282c1 <+561>: mov $0x44a674,%esi
0x00000000004282c6 <+566>: callq 0x432090 <_redisAssert>
0x00000000004282cb <+571>: mov $0x1,%edi
0x00000000004282d0 <+576>: callq 0x40c3a0 <_exit@plt>
0x00000000004282d5 <+581>: mov $0x448982,%edi
0x00000000004282da <+586>: mov $0x7ff,%edx
0x00000000004282df <+591>: mov $0x44a674,%esi
0x00000000004282e4 <+596>: callq 0x432090 <_redisAssert>
0x00000000004282e9 <+601>: mov $0x1,%edi
0x00000000004282ee <+606>: callq 0x40c3a0 <_exit@plt>
0x00000000004282f3 <+611>: mov $0x44a6e5,%edi
0x00000000004282f8 <+616>: mov $0x7e2,%edx
0x00000000004282fd <+621>: mov $0x44a674,%esi
0x0000000000428302 <+626>: callq 0x432090 <_redisAssert>
0x0000000000428307 <+631>: mov $0x1,%edi
0x000000000042830c <+636>: callq 0x40c3a0 <_exit@plt>
0x0000000000428311 <+641>: mov $0x44a6bd,%edi
0x0000000000428316 <+646>: mov $0x7e0,%edx
0x000000000042831b <+651>: mov $0x44a674,%esi
0x0000000000428320 <+656>: callq 0x432090 <_redisAssert>
0x0000000000428325 <+661>: mov $0x1,%edi
0x000000000042832a <+666>: callq 0x40c3a0 <_exit@plt>
End of assembler dump.
As requested, this is the tryObjectEncoding function:
/* Try to encode a string object in order to save space */
robj *tryObjectEncoding(robj *o) {
long value;
sds s = o->ptr;
if (o->encoding != REDIS_ENCODING_RAW)
return o; /* Already encoded */
/* It's not safe to encode shared objects: shared objects can be shared
* everywhere in the "object space" of Redis. Encoded objects can only
* appear as "values" (and not, for instance, as keys) */
if (o->refcount > 1) return o;
/* Currently we try to encode only strings */
redisAssert(o->type == REDIS_STRING);
/* Check if we can represent this string as a long integer */
if (!string2l(s,sdslen(s),&value)) return o;
/* Ok, this object can be encoded...
*
* Can I use a shared object? Only if the object is inside a given
* range and if this is the main thread, since when VM is enabled we
* have the constraint that I/O thread should only handle non-shared
* objects, in order to avoid race conditions (we don't have per-object
* locking).
*
* Note that we also avoid using shared integers when maxmemory is used
* because every object needs to have a private LRU field for the LRU
* algorithm to work well. */
if (server.maxmemory == 0 && value >= 0 && value < REDIS_SHARED_INTEGERS &&
pthread_equal(pthread_self(),server.mainthread)) {
decrRefCount(o);
incrRefCount(shared.integers[value]);
return shared.integers[value];
} else {
o->encoding = REDIS_ENCODING_INT;
sdsfree(o->ptr);
o->ptr = (void*) value;
return o;
}
}
| I think I can answer my own question now...
Basically this is what happens: zslGetRank() is called by zrankGenericCommand() with its first argument passed in the %rdi register. However, later on the function reuses %rdi to hold an object pointer (and indeed %rdi does point to a valid object):
(gdb) print *(robj*)0x7f3d8d71c360
$1 = {type = 0, storage = 0, encoding = 1, lru = 517611, refcount = 2,
ptr = 0x1524db19}
The instruction pointer actually pointed to zslGetRank+64 at the time of the crash; I did something wrong with gdb and modified the register before posting the original question.
Also, how can we verify that zslGetRank() receives the right address as its first argument? Since %r14 gets saved on the stack by zslGetRank(), we can inspect the stack and check whether the right address is there. So we dump memory near the stack pointer:
0x7fffe61a8000: 0x40337fa0a3376aff 0x00007f3d9dcdc000
0x7fffe61a8010: 0x00007f3d9dcdc000 0x00007f3d4cab5760
0x7fffe61a8020: 0x0000000000000001 0x00007f3d9de574b0
---> 0x7fffe61a8030: 0x00007f3d9de591c0 0x000000000042818b
0x7fffe61a8040: 0x0000000000000000 0x00000000000285c0
0x7fffe61a8050: 0x0000000000000000 0x00007f3d9dcdc000
0x7fffe61a8060: 0x0000000000000000 0x00007f3d9dcdc000
0x7fffe61a8070: 0x0000000000000000 0x0004b6b413e12d9a
0x7fffe61a8080: 0x00000000000003d8 0x0000000000000001
As you can see, the right address is indeed there on the stack.
So, long story short, the function is called with the right address; it is just that gdb can't reconstruct the right stack trace because the %rdi register gets modified and reused for something else inside the function.
So this is possibly a memory corruption issue. What I'll do now is walk the sorted set by hand, simulating the work of zslGetRank(), so that I can isolate the broken node and hopefully see in which way it is corrupted.
Thanks for your help.
Edit: here you can find a manually annotated disassembly of the zslGetRank() function -> https://gist.github.com/1641112 (I used it both to learn some more assembler and to make my inspection simpler).
| Redis | 8,911,883 | 23 |
I am using nowjs and node_redis. I am trying to create something very simple, but so far the tutorials have left me blank because they only ever do console.log().
//REDIS
var redis = require("redis"),
client = redis.createClient();
client.on("error", function (err) {
console.log("Error "+ err);
});
client.set("card", "apple");
everyone.now.signalShowRedisCard = function() {
nowjs.getGroup(this.now.room).now.receiveShowRedisCard(client.get("card").toString());
}
In my client side:
now.receiveShowRedisCard = function(card_id) {
alert("redis card: "+card_id);
}
The alert only gives out "true" - I was expecting to get the value of the key "card" which is "apple".
Any ideas?
| You are trying to use an async library in a sync way. This is the right way:
//REDIS
var redis = require("redis"),
client = redis.createClient();
client.on("error", function (err) {
console.log("Error "+ err);
});
client.set("card", "apple", function(err) {
if (err) throw err;
});
everyone.now.signalShowRedisCard = function() {
var self = this;
client.get("card", function (err, res) {
nowjs.getGroup(self.now.room).now.receiveShowRedisCard(res);
});
}
| Redis | 6,924,639 | 23 |
So I've already read this post about there not being an MGET analog for Redis hashes. One of the answers said to use MULTI/EXEC to do the operation in bulk, and that does work for lists and regular keys, but unfortunately not for hashes. Right now, however, I'm doing a call over the wire for every single hash I want to retrieve, which seems like bad news to me.
So my question is: what is the most efficient way to get several hashes back from Redis, with the standard of efficiency being the least number of network calls? I'm using Redis 2.0.4, programming with the Python client. Thanks!
| The most efficient way would be using a pipeline.
Assuming you want everything for a given key and know all the keys already:
import redis
r = redis.Redis(host='localhost', port=6379, db=0)
p = r.pipeline()
for key in keys:
p.hgetall(key)
for h in p.execute():
print h
More information about pipelines can be found here: http://redis.io/topics/pipelining
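If you only need a few fields of each hash, the same pipelining idea works with HMGET; a minimal sketch (the field names 'name' and 'score' and the example keys are made up for illustration):
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
keys = ['hash:1', 'hash:2']  # hypothetical hash keys

p = r.pipeline()
for key in keys:
    # fetch only the fields you care about instead of the whole hash
    p.hmget(key, ['name', 'score'])
for values in p.execute():
    print(values)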
| Redis | 4,929,202 | 23 |
I'm a newcomer to Redis and I'm looking for some specific help around sets. To give some background: I'm building a web-app which consists of a large number of card decks which each have a set of individual cards with unique ids. I want users to have a set of 5 cards drawn for them at random from a specific deck.
My plan is to have all of the card ids of a given deck stored as a set in Redis; then I want to use the SPOP function to draw individual cards and remove them from the set so that they are not drawn again within that hand. It would seem to make sense to do this by copying the deck's 'master set' of card IDs into a new temporary set, performing the popping on the copy and then deleting the copied set when I'm done.
But: I can't find any Redis function to command a set copy - the closest thing I can see would be to also create an empty set and then 'join' the empty set and the 'master copy' of the set into a new (if temporary) set with SUNIONSTORE, but that seems hacky. I suppose an alternative would be to copy the set items out into my 'host language' (node.js) and then manually insert the items back into a new Redis set, but this also seems clunky. There's probably a better third option that I haven't even thought of.
Am I doing something wrong - am I not 'getting' Redis, or is the command-set still a bit immature?
| redis> sadd mydeck 1
(integer) 1
redis> sadd mydeck 2
(integer) 1
redis> sadd mydeck 3
(integer) 1
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> sunionstore tempdeck mydeck
(integer) 3
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> smembers tempdeck
1) "1"
2) "2"
3) "3"
Have fun with Redis!
Salvatore
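For reference, a minimal redis-py sketch of the full flow described in the question (copy the master set, pop a hand of cards, discard the copy); the key names and hand size here are made up, and the same commands (SUNIONSTORE, SPOP, DEL) work from node.js as well:
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def draw_hand(deck_key, hand_size=5):
    temp_key = deck_key + ':hand'       # hypothetical temporary key
    r.sunionstore(temp_key, deck_key)   # copy the deck's master set of card ids
    cards = [r.spop(temp_key) for _ in range(hand_size)]  # draw without repeats
    r.delete(temp_key)                  # throw the working copy away
    return cards

print(draw_hand('mydeck'))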
| Redis | 4,474,770 | 23 |
Application consists of:
- Django
- Redis
- Celery
- Docker
- Postgres
Before moving the project into Docker everything was working smoothly, but once it was moved into containers something started to go wrong.
At first it starts perfectly fine, but after a while I receive the following error:
celery-beat_1 | ERROR: Pidfile (celerybeat.pid) already exists.
I've been struggling with it for a while, but at this point I give up. I have no idea what is wrong with it.
Dockerfile:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY /scripts/startup/entrypoint.sh entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
COPY . /opt/services/djangoapp/src
RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;
RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py
RUN cd app && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]
docker-compose.yml:
version: '3'
services:
djangoapp:
build: .
volumes:
- .:/opt/services/djangoapp/src
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
- static_local_volume:/opt/services/djangoapp/src/app/static
- media_local_volume:/opt/services/djangoapp/src/app/media
- .:/code
restart: always
networks:
- nginx_network
- database1_network # comment when testing
# - test_database1_network # uncomment when testing
- redis_network
depends_on:
- database1 # comment when testing
# - test_database1 # uncomment when testing
- migration
- redis
# base redis server
redis:
image: "redis:alpine"
restart: always
ports:
- "6379:6379"
networks:
- redis_network
volumes:
- redis_data:/data
# celery worker
celery:
build: .
command: >
bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
volumes:
- .:/opt/services/djangoapp/src
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
- static_local_volume:/opt/services/djangoapp/src/app/static
- media_local_volume:/opt/services/djangoapp/src/app/media
networks:
- redis_network
- database1_network # comment when testing
# - test_database1_network # uncomment when testing
restart: always
depends_on:
- database1 # comment when testing
# - test_database1 # uncomment when testing
- redis
links:
- redis
celery-beat:
build: .
command: >
bash -c "cd app && celery -A example beat"
volumes:
- .:/opt/services/djangoapp/src
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
- static_local_volume:/opt/services/djangoapp/src/app/static
- media_local_volume:/opt/services/djangoapp/src/app/media
networks:
- redis_network
- database1_network # comment when testing
# - test_database1_network # uncomment when testing
restart: always
depends_on:
- database1 # comment when testing
# - test_database1 # uncomment when testing
- redis
links:
- redis
# migrations needed for proper db functioning
migration:
build: .
command: >
bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
depends_on:
- database1 # comment when testing
# - test_database1 # uncomment when testing
networks:
- database1_network # comment when testing
# - test_database1_network # uncomment when testing
# reverse proxy container (nginx)
nginx:
image: nginx:1.13
ports:
- 80:80
volumes:
- ./config/nginx/conf.d:/etc/nginx/conf.d
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
- static_local_volume:/opt/services/djangoapp/src/app/static
- media_local_volume:/opt/services/djangoapp/src/app/media
restart: always
depends_on:
- djangoapp
networks:
- nginx_network
database1: # comment when testing
image: postgres:10 # comment when testing
env_file: # comment when testing
- config/db/database1_env # comment when testing
networks: # comment when testing
- database1_network # comment when testing
volumes: # comment when testing
- database1_volume:/var/lib/postgresql/data # comment when testing
# test_database1: # uncomment when testing
# image: postgres:10 # uncomment when testing
# env_file: # uncomment when testing
# - config/db/test_database1_env # uncomment when testing
# networks: # uncomment when testing
# - test_database1_network # uncomment when testing
# volumes: # uncomment when testing
# - test_database1_volume:/var/lib/postgresql/data # uncomment when testing
networks:
nginx_network:
driver: bridge
database1_network: # comment when testing
driver: bridge # comment when testing
# test_database1_network: # uncomment when testing
# driver: bridge # uncomment when testing
redis_network:
driver: bridge
volumes:
database1_volume: # comment when testing
# test_database1_volume: # uncomment when testing
static_volume: # <-- declare the static volume
media_volume: # <-- declare the media volume
static_local_volume:
media_local_volume:
redis_data:
Please, ignore "test_database1_volume" as it exists only for test purposes.
| Another solution (taken from https://stackoverflow.com/a/17674248/39296) is to use --pidfile= (with no path) to not create a pidfile at all. Same effect as Siyu's answer above.
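As a sketch only, wired into the compose file from the question the flag would go on the celery-beat command (all other keys of the service unchanged):
  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat --pidfile="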
| Redis | 53,521,959 | 22 |
I am trying to set up a docker-compose file intended to replace a single-Docker-container solution that runs several processes (an RQ worker, RQ dashboard and a Flask application) under Supervisor.
The host system is a Debian 8 Linux and my docker-compose.yml looks like this (I deleted all other entries to reduce error sources):
version: '2'
services:
redis:
image: redis:latest
rq-worker1:
build: .
command: /usr/local/bin/rqworker boo-uploads
depends_on:
- redis
"rq-worker1" is a Python RQ worker, trying to connect to redis via localhost and port 6379, but it fails to establish a connection:
redis_1 | 1:M 23 Dec 13:06:26.285 * The server is now ready to accept connections on port 6379
rq-worker1_1 | [2016-12-23 13:06] DEBUG: worker: Registering birth of worker d5cb16062fc0.1
rq-worker1_1 | Error 111 connecting to localhost:6379. Connection refused.
galileoqueue_rq-worker1_1 exited with code 1
The output of docker ps looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36cac91670d2 redis:latest "docker-entrypoint.sh" 14 minutes ago Up About a minute 6379/tcp galileoqueue_redis_1
I tried everything, from running the RQ worker against the local IPs 0.0.0.0 / 127.0.0.1 to using localhost. Other solutions posted on Stack Overflow didn't work for me either (e.g. docker-compose: connection refused between containers, but service accessible from host).
And this is my docker info output:
Containers: 25
Running: 1
Paused: 0
Stopped: 24
Images: 485
Server Version: 1.12.5
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 436
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 13.61 GiB
Name: gp-pc-201
ID: LBGV:K26G:UXXI:BWRH:OYVE:OQTA:N7LQ:I4DV:BTNH:FZEW:7XDD:WOCU
Does anyone have an idea why the connection between the two containers doesn't work?
| In your setup, localhost inside rq-worker1 refers to rq-worker1 itself, not to redis, so you can't reach redis:6379 by connecting to localhost from rq-worker1. However, by default redis and rq-worker1 are on the same network, and you can use a service name as a hostname within that network.
This means you can connect to the redis service from rq-worker1 using redis as the hostname, for instance: client.connect(("redis", 6379))
You should replace localhost with redis in the configuration of rq-worker1.
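For illustration, a minimal sketch of an RQ worker that connects by service name instead of localhost (the queue name is taken from the compose command; your actual worker wiring may differ):
import redis
from rq import Connection, Worker

# 'redis' resolves to the redis service on the shared compose network
conn = redis.Redis(host='redis', port=6379)

with Connection(conn):
    Worker(['boo-uploads']).work()
If the rqworker script you are calling supports the --url option, the same effect can be had with rqworker --url redis://redis:6379 boo-uploads.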
| Redis | 41,302,791 | 22 |
My application currently uses Spring Session together with Redis as the backend.
I searched the official documentation for Spring Session but was not able to find what the default session timeout is when using that module. I am also not sure how to change that default timeout if necessary.
Can someone please advise?
| The easiest way to configure the session timeout when using the Redis repository is:
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 60)
or @EnableRedissonHttpSession(maxInactiveIntervalInSeconds = 1200) if the Redisson dependency is on the classpath.
The session expires when it is no longer available in the repository.
The timeout can be configured with setDefaultMaxInactiveInterval(int) on both RedisOperationsSessionRepository and MapSessionRepository; the default value is 30 minutes.
If you are using Spring Boot, then as of version 1.3 it automatically syncs the value with the server.session.timeout property from the application configuration.
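For instance, in a Spring Boot application.properties this could look like the following (value in seconds; shown only as an illustration):
server.session.timeout=1800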
Note that one of the shortcomings of Spring Session is that javax.servlet.http.HttpSessionListeners are not invoked.
If you need to react to session expiration events, you can subscribe to the SessionDestroyedEvent application event of your Spring application.
| Redis | 32,501,541 | 22 |