question | answer | tag | question_id | score |
---|---|---|---|---|
I am using PostgreSQL via the Ruby gem 'sequel'.
I'm trying to round to two decimal places.
Here's my code:
SELECT ROUND(AVG(some_column),2)
FROM table
I get the following error:
PG::Error: ERROR: function round(double precision, integer) does
not exist (Sequel::DatabaseError)
I get no error when I run the following code:
SELECT ROUND(AVG(some_column))
FROM table
Does anyone know what I am doing wrong?
| PostgreSQL does not define round(double precision, integer). For reasons @Mike Sherrill 'Cat Recall' explains in the comments, the version of round that takes a precision is only available for numeric.
regress=> SELECT round( float8 '3.1415927', 2 );
ERROR: function round(double precision, integer) does not exist
regress=> \df *round*
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+--------+------------------+---------------------+--------
pg_catalog | dround | double precision | double precision | normal
pg_catalog | round | double precision | double precision | normal
pg_catalog | round | numeric | numeric | normal
pg_catalog | round | numeric | numeric, integer | normal
(4 rows)
regress=> SELECT round( CAST(float8 '3.1415927' as numeric), 2);
round
-------
3.14
(1 row)
(In the above, note that float8 is just a shorthand alias for double precision. You can see that PostgreSQL is expanding it in the output).
You must cast the value to be rounded to numeric to use the two-argument form of round. Just append ::numeric for the shorthand cast, like round(val::numeric,2).
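Applied to the query from the question, the shorthand cast looks like this (a sketch; some_column and my_table stand in for your actual column and table names):
SELECT round(avg(some_column)::numeric, 2)
FROM my_table;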
If you're formatting for display to the user, don't use round. Use to_char (see: data type formatting functions in the manual), which lets you specify a format and gives you a text result that isn't affected by whatever weirdness your client language might do with numeric values. For example:
regress=> SELECT to_char(float8 '3.1415927', 'FM999999999.00');
to_char
---------------
3.14
(1 row)
to_char will round numbers for you as part of formatting. The FM prefix tells to_char that you don't want any padding with leading spaces.
| PostgreSQL | 13,113,096 | 345 |
How can I list all the tables of a PostgreSQL database and order them by size?
| select
table_name,
pg_size_pretty(pg_total_relation_size(quote_ident(table_name))),
pg_total_relation_size(quote_ident(table_name))
from information_schema.tables
where table_schema = 'public'
order by 3 desc;
This shows you the size of all tables in the schema public. If you have multiple schemas, you might want to use:
select table_schema, table_name, pg_relation_size('"'||table_schema||'"."'||table_name||'"')
from information_schema.tables
order by 3
SQLFiddle example: http://sqlfiddle.com/#!15/13157/3
List of all object size functions in the manual.
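As an alternative sketch (not part of the original answer), you can query the system catalog pg_class directly, which avoids the identifier quoting needed with information_schema; relkind = 'r' restricts the result to ordinary tables:
select n.nspname as table_schema,
       c.relname as table_name,
       pg_size_pretty(pg_total_relation_size(c.oid))
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'r'
  and n.nspname not in ('pg_catalog', 'information_schema')
order by pg_total_relation_size(c.oid) desc;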
| PostgreSQL | 21,738,408 | 344 |
I want to run this query:
SELECT DISTINCT ON (address_id) purchases.address_id, purchases.*
FROM purchases
WHERE purchases.product_id = 1
ORDER BY purchases.purchased_at DESC
But I get this error:
PG::Error: ERROR: SELECT DISTINCT ON expressions must match initial ORDER BY expressions
Adding address_id as the first ORDER BY expression silences the error, but I really don't want to add sorting over address_id. Is it possible to do this without ordering by address_id?
| Documentation says:
DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. [...] Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first. [...] The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s).
Official documentation
So you'll have to add the address_id to the order by.
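For reference, the corrected version of the query from the question simply puts address_id first in the ORDER BY:
SELECT DISTINCT ON (address_id) purchases.address_id, purchases.*
FROM purchases
WHERE purchases.product_id = 1
ORDER BY purchases.address_id, purchases.purchased_at DESC
The result is then sorted by address_id first, which is exactly what the question wants to avoid; the alternatives below address that.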
Alternatively, if you're looking for the full row that contains the most recent purchased product for each address_id, with that result sorted by purchased_at, then you're trying to solve a greatest-N-per-group problem, which can be solved by the following approaches:
The general solution that should work in most DBMSs:
SELECT t1.* FROM purchases t1
JOIN (
SELECT address_id, max(purchased_at) max_purchased_at
FROM purchases
WHERE product_id = 1
GROUP BY address_id
) t2
ON t1.address_id = t2.address_id AND t1.purchased_at = t2.max_purchased_at
ORDER BY t1.purchased_at DESC
A more PostgreSQL-oriented solution based on @hkf's answer:
SELECT * FROM (
SELECT DISTINCT ON (address_id) *
FROM purchases
WHERE product_id = 1
ORDER BY address_id, purchased_at DESC
) t
ORDER BY purchased_at DESC
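As a side note (not part of the original answer), a multicolumn index can speed up both approaches; a possible sketch, whose benefit depends on your data and query mix:
CREATE INDEX purchases_product_address_purchased_idx
ON purchases (product_id, address_id, purchased_at DESC);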
Problem clarified, extended and solved here: Selecting rows ordered by some column and distinct on another
| PostgreSQL | 9,795,660 | 343 |
I would like to give a user all the permissions on a database without making it an admin.
The reason I want to do that is that, at the moment, DEV and PROD are different DBs on the same cluster, so I don't want a user to be able to change production objects, but it must be able to change objects on DEV.
I tried:
grant ALL on database MY_DB to group MY_GROUP;
but it doesn't seem to give any permission.
Then I tried:
grant all privileges on schema MY_SCHEMA to group MY_GROUP;
and it seems to give me permission to create objects, but not to query/delete objects in that schema that belong to other users.
I could go on by giving USAGE permission to the user on MY_SCHEMA but then it would complain about not having permissions on the table ...
So I guess my question is: is there any easy way of giving all the permissions to a user on a DB?
I'm working on PostgreSQL 8.1.23.
| All commands must be executed while connected to the right database cluster. Make sure of it.
Roles are objects of the database cluster. All databases of the same cluster share the set of defined roles. Privileges are granted / revoked per database / schema / table etc.
A role needs access to the database, obviously. That's granted to PUBLIC by default. Else:
GRANT CONNECT ON DATABASE my_db TO my_user;
Basic privileges for Postgres 14 or later
Postgres 14 adds the predefined, non-login roles pg_read_all_data / pg_write_all_data.
They have SELECT / INSERT, UPDATE, DELETE privileges for all tables, views, and sequences. Plus USAGE on schemas. We can GRANT membership in these roles:
GRANT pg_read_all_data TO my_user;
GRANT pg_write_all_data TO my_user;
This covers all basic DML commands (but not DDL, and not some special commands like TRUNCATE or the EXECUTE privilege for functions!). The manual:
pg_read_all_data
Read all data (tables, views, sequences), as if having SELECT rights
on those objects, and USAGE rights on all schemas, even without
having it explicitly. This role does not have the role attribute
BYPASSRLS set. If RLS is being used, an administrator may wish to
set BYPASSRLS on roles which this role is GRANTed to.
pg_write_all_data
Write all data (tables, views, sequences), as if having INSERT,
UPDATE, and DELETE rights on those objects, and USAGE rights on
all schemas, even without having it explicitly. This role does not
have the role attribute BYPASSRLS set. If RLS is being used, an
administrator may wish to set BYPASSRLS on roles which this role is
GRANTed to.
All privileges without using predefined roles (any Postgres version)
Commands must be executed while connected to the right database. Make sure of it.
The role needs (at least) the USAGE privilege on the schema. Again, if that's granted to PUBLIC, you are covered. Else:
GRANT USAGE ON SCHEMA public TO my_user;
To also allow the creation of objects, the role needs the CREATE privilege. With Postgres 15, security has been tightened and that privilege on the default schema public is not granted to PUBLIC any more. You might want that, too. Or just grant ALL to your role:
GRANT ALL ON SCHEMA public TO my_user;
Or grant USAGE / CREATE / ALL on all custom schemas:
DO
$$
BEGIN
-- RAISE NOTICE '%', ( -- use instead of EXECUTE to see generated commands
EXECUTE (
SELECT string_agg(format('GRANT USAGE ON SCHEMA %I TO my_user', nspname), '; ')
FROM pg_namespace
-- SELECT string_agg(format('GRANT ALL ON SCHEMA %I TO my_user', nspname), '; ')
WHERE nspname <> 'information_schema' -- exclude information schema and ...
AND nspname NOT LIKE 'pg\_%' -- ... system schemas
);
END
$$;
Then all permissions for all tables. And don't forget sequences (if any), which are used for legacy serial columns.
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO my_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO my_user;
Since Postgres 10, IDENTITY columns can replace serial columns, and those don't need separate privileges for the involved sequence. See:
Auto increment table column
Alternatively, you could use the "Grant Wizard" of pgAdmin 4 to work with a GUI.
This covers privileges for existing objects. To also cover future objects, set DEFAULT PRIVILEGES. See:
Grant privileges for a particular database in PostgreSQL
How to manage DEFAULT PRIVILEGES for USERs on a DATABASE vs SCHEMA?
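A minimal sketch for the role used in this answer; note that these defaults only apply to objects later created in schema public by the role that runs the commands (or by a role named with FOR ROLE):
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO my_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO my_user;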
There are some other objects, the manual for GRANT has the complete list. As of Postgres 14:
privileges on a database object (table, column, view, foreign table, sequence, database, foreign-data wrapper, foreign server, function, procedure, procedural language, schema, or tablespace)
But the rest is rarely needed. More details:
Grant privileges for a particular database in PostgreSQL
How to grant all privileges on views to arbitrary user
Consider upgrading to a current version.
| PostgreSQL | 22,483,555 | 342 |
Every time I run my rails 4.0 server, I get this output.
Started GET "/" for 127.0.0.1 at 2013-11-06 23:56:36 -0500
PG::ConnectionBad - could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (fe80::1) and accepting
TCP/IP connections on port 5432?
:
activerecord (4.0.0) lib/active_record/connection_adapters/postgresql_adapter.rb:825:in `connect'
activerecord (4.0.0) lib/active_record/connection_adapters/postgresql_adapter.rb:542:in `initialize'
activerecord (4.0.0) lib/active_record/connection_adapters/postgresql_adapter.rb:41:in `postgresql_connection'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:440:in `new_connection'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:450:in `checkout_new_connection'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:421:in `acquire_connection'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:356:in `block in checkout'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:355:in `checkout'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:546:in `retrieve_connection'
activerecord (4.0.0) lib/active_record/connection_handling.rb:79:in `retrieve_connection'
activerecord (4.0.0) lib/active_record/connection_handling.rb:53:in `connection'
activerecord (4.0.0) lib/active_record/migration.rb:792:in `current_version'
activerecord (4.0.0) lib/active_record/migration.rb:800:in `needs_migration?'
activerecord (4.0.0) lib/active_record/migration.rb:379:in `check_pending!'
activerecord (4.0.0) lib/active_record/migration.rb:366:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
activesupport (4.0.0) lib/active_support/callbacks.rb:373:in `_run__1613334440513032208__call__callbacks'
activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks'
actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/reloader.rb:64:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/remote_ip.rb:76:in `call'
better_errors (0.9.0) lib/better_errors/middleware.rb:84:in `protected_app_call'
better_errors (0.9.0) lib/better_errors/middleware.rb:79:in `better_errors_call'
better_errors (0.9.0) lib/better_errors/middleware.rb:56:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
railties (4.0.0) lib/rails/rack/logger.rb:38:in `call_app'
railties (4.0.0) lib/rails/rack/logger.rb:21:in `block in call'
activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `block in tagged'
activesupport (4.0.0) lib/active_support/tagged_logging.rb:25:in `tagged'
activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `tagged'
railties (4.0.0) lib/rails/rack/logger.rb:21:in `call'
quiet_assets (1.0.2) lib/quiet_assets.rb:18:in `call_with_quiet_assets'
actionpack (4.0.0) lib/action_dispatch/middleware/request_id.rb:21:in `call'
rack (1.5.2) lib/rack/methodoverride.rb:21:in `call'
rack (1.5.2) lib/rack/runtime.rb:17:in `call'
activesupport (4.0.0) lib/active_support/cache/strategy/local_cache.rb:83:in `call'
rack (1.5.2) lib/rack/lock.rb:17:in `call'
actionpack (4.0.0) lib/action_dispatch/middleware/static.rb:64:in `call'
railties (4.0.0) lib/rails/engine.rb:511:in `call'
railties (4.0.0) lib/rails/application.rb:97:in `call'
rack (1.5.2) lib/rack/content_length.rb:14:in `call'
thin (1.5.1) lib/thin/connection.rb:81:in `block in pre_process'
thin (1.5.1) lib/thin/connection.rb:79:in `pre_process'
thin (1.5.1) lib/thin/connection.rb:54:in `process'
thin (1.5.1) lib/thin/connection.rb:39:in `receive_data'
eventmachine (1.0.3) lib/eventmachine.rb:187:in `run'
thin (1.5.1) lib/thin/backends/base.rb:63:in `start'
thin (1.5.1) lib/thin/server.rb:159:in `start'
rack (1.5.2) lib/rack/handler/thin.rb:16:in `run'
rack (1.5.2) lib/rack/server.rb:264:in `start'
railties (4.0.0) lib/rails/commands/server.rb:84:in `start'
railties (4.0.0) lib/rails/commands.rb:78:in `block in <top (required)>'
railties (4.0.0) lib/rails/commands.rb:73:in `<top (required)>'
bin/rails:4:in `<main>'
I'm running Mavericks OS X 10.9, so I don't know if that's the problem. I've tried everything I could, but nothing seems to work. I've uninstalled and reinstalled both Postgres and the pg gem multiple times now.
This is my database.yml file
development:
adapter: postgresql
encoding: unicode
database: metals-directory_development
pool: 5
username:
password:
template: template0
host: localhost
port: 5432
test: &test
adapter: postgresql
encoding: unicode
database: metals-directory_test
pool: 5
username:
password:
template: template0
host: localhost
port: 5432
staging:
adapter: postgresql
encoding: unicode
database: metals-directory_production
pool: 5
username:
password:
template: template0
host: localhost
production:
adapter: postgresql
encoding: unicode
database: metals-directory_production
pool: 5
username:
password:
template: template0
host: localhost
cucumber:
<<: *test
| It could be as simple as a stale PID file. It could be failing silently because your computer didn't complete the shutdown process fully, which means Postgres didn't delete the PID (process ID) file.
The PID file is used by postgres to make sure only one instance of the server is running at a time. So when it goes to start again, it fails because there is already a PID file which tells postgres that another instance of the server was started (even though it isn't running, it just didn't get to shutdown and delete the PID).
To fix it, remove/rename the PID file. Find the Postgres data directory. On macOS using Homebrew it is in /usr/local/var/postgres/ (or /usr/local/var/log/); on other systems it might be /usr/var/postgres/. On M1 Macs, it might be /opt/homebrew/var/postgresql.
To make sure this is the problem, look at the log file (server.log). On the last lines you will see:
FATAL: lock file "postmaster.pid" already exists
HINT: Is another postmaster (PID 347) running in data directory "/usr/local/var/postgres"?
If so, rm postmaster.pid
Restart your server. On a Mac using launchctl (with Homebrew), the following commands will restart the server.
brew services restart postgresql
OR on older versions of Brew
launchctl unload homebrew.mxcl.postgresql.plist
launchctl load -w homebrew.mxcl.postgresql.plist
| PostgreSQL | 19,828,385 | 339 |
I have been facing a strange scenario when comparing dates in PostgreSQL (version 9.2.4 on Windows).
I have a column in my table, say update_date, with type timestamp without time zone. Clients can search over this field with only a date (e.g. 2013-05-03) or a date with time (e.g. 2013-05-03 12:20:00).
This column currently holds a timestamp for all rows; they all have the same date part, 2013-05-03, but differ in the time part.
When I'm comparing over this column, I'm getting different results, like the following:
select * from table where update_date >= '2013-05-03' AND update_date <= '2013-05-03' -> No results
select * from table where update_date >= '2013-05-03' AND update_date < '2013-05-03' -> No results
select * from table where update_date >= '2013-05-03' AND update_date <= '2013-05-04' -> results found
select * from table where update_date >= '2013-05-03' -> results found
My question is: how can I make the first query return results? I mean, why does the 3rd query work but not the first one?
| @Nicolai is correct about casting and why the condition is false for any data: the string '2013-05-03' is cast to the timestamp 2013-05-03 00:00:00, so any row with a later time on that date fails the <= comparison. I guess you prefer the first form because you want to avoid date manipulation on the input string, correct? You don't need to worry:
SELECT *
FROM table
WHERE update_date >= '2013-05-03'::date
AND update_date < ('2013-05-03'::date + '1 day'::interval);
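An alternative (not from the original answer) is to cast the column instead of the input, reusing the placeholder names from the question; note that a plain index on update_date can then no longer be used unless you also create an expression index on update_date::date:
SELECT *
FROM table
WHERE update_date::date = '2013-05-03';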
| PostgreSQL | 19,469,154 | 335 |
I'm building a web app, and I need to make a branch for some major changes. The thing is, these changes require changes to the database schema, so I'd like to put the entire database under git as well.
How do I do that? Is there a specific folder that I can keep under a git repository? How do I know which one? How can I be sure that I'm putting the right folder under version control?
I need to be sure, because these changes are not backward compatible; I can't afford to screw up.
The database in my case is PostgreSQL
Edit:
Someone suggested taking backups and putting the backup file under version control instead of the database. To be honest, I find that really hard to swallow.
There has to be a better way.
Update:
OK, so there's no better way, but I'm still not quite convinced, so I will change the question a bit:
I'd like to put the entire database under version control, what database engine can I use so that I can put the actual database under version control instead of its dump?
Would sqlite be git-friendly?
Since this is only the development environment, I can choose whatever database I want.
Edit2:
What I really want is not to track my development history, but to be able to switch from my "new radical changes" branch to the "current stable branch" and be able for instance to fix some bugs/issues, etc, with the current stable branch. Such that when I switch branches, the database auto-magically becomes compatible with the branch I'm currently on.
I don't really care much about the actual data.
| Take a database dump, and version control that instead. This way it is a flat text file.
Personally I suggest that you keep both a data dump, and a schema dump. This way using diff it becomes fairly easy to see what changed in the schema from revision to revision.
If you are making big changes, you should have a secondary database that you make the new schema changes to and not touch the old one since as you said you are making a branch.
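In PostgreSQL, one quick way to spin up such a secondary database is to clone the current one as a template (a sketch with hypothetical database names; the source database must have no active connections while it is being copied):
CREATE DATABASE myapp_feature_branch TEMPLATE myapp_dev;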
| PostgreSQL | 846,659 | 332 |
I ran the following SQL script on my database:
create table cities (
id serial primary key,
name text not null
);
create table reports (
id serial primary key,
cityid integer not null references cities(id),
reportdate date not null,
reporttext text not null
);
create user www with password 'www';
grant select on cities to www;
grant insert on cities to www;
grant delete on cities to www;
grant select on reports to www;
grant insert on reports to www;
grant delete on reports to www;
grant select on cities_id_seq to www;
grant insert on cities_id_seq to www;
grant delete on cities_id_seq to www;
grant select on reports_id_seq to www;
grant insert on reports_id_seq to www;
grant delete on reports_id_seq to www;
When, as the user www, I try to run:
insert into cities (name) values ('London');
I get the following error:
ERROR: permission denied for sequence cities_id_seq
I get that the problem lies with the serial type. That's why I grant select, insert and delete rights for the *_id_seq to www. Yet this does not fix my problem. What am I missing?
| Since PostgreSQL 8.2 you have to use:
GRANT USAGE, SELECT ON SEQUENCE cities_id_seq TO www;
GRANT USAGE - For sequences, this privilege allows the use of the currval and nextval functions.
Also as pointed out by @epic_fil in the comments you can grant permissions to all the sequences in the schema with:
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO www;
Note: Don't forget to choose the database (\c <database_name>) before executing the privilege grant commands
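These grants only cover sequences that already exist; for sequences created later (for example by new serial columns), a default-privileges sketch along these lines can help (it applies to objects created by the role running the command):
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO www;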
| PostgreSQL | 9,325,017 | 331 |
I'm building a Django site and I am looking for a search engine.
A few candidates:
Lucene/Lucene with Compass/Solr
Sphinx
Postgresql built-in full text search
MySQl built-in full text search
Selection criteria:
result relevance and ranking
searching and indexing speed
ease of use and ease of integration with Django
resource requirements - site will be hosted on a VPS, so ideally the search engine wouldn't require a lot of RAM and CPU
scalability
extra features such as "did you mean?", related searches, etc
Anyone who has had experience with the search engines above, or other engines not in the list -- I would love to hear your opinions.
EDIT: As for indexing needs, as users keep entering data into the site, that data would need to be indexed continuously. It doesn't have to be real time, but ideally new data would show up in the index with no more than a 15-30 minute delay.
| Good to see someone's chimed in about Lucene - because I've no idea about that.
Sphinx, on the other hand, I know quite well, so let's see if I can be of some help.
Result relevance ranking is the default. You can set up your own sorting should you wish, and give specific fields higher weightings.
Indexing speed is super-fast, because it talks directly to the database. Any slowness will come from complex SQL queries and un-indexed foreign keys and other such problems. I've never noticed any slowness in searching either.
I'm a Rails guy, so I've no idea how easy it is to implement with Django. There is a Python API that comes with the Sphinx source though.
The search service daemon (searchd) is pretty low on memory usage - and you can set limits on how much memory the indexer process uses too.
Scalability is where my knowledge is more sketchy - but it's easy enough to copy index files to multiple machines and run several searchd daemons. The general impression I get from others though is that it's pretty damn good under high load, so scaling it out across multiple machines isn't something that needs to be dealt with.
There's no support for 'did-you-mean', etc - although these can be done with other tools easily enough. Sphinx does stem words though using dictionaries, so 'driving' and 'drive' (for example) would be considered the same in searches.
Sphinx doesn't allow partial index updates for field data though. The common approach to this is to maintain a delta index with all the recent changes, and re-index this after every change (and those new results appear within a second or two). Because of the small amount of data, this can take a matter of seconds. You will still need to re-index the main dataset regularly though (although how regularly depends on the volatility of your data - every day? every hour?). The fast indexing speeds keep this all pretty painless though.
I've no idea how applicable to your situation this is, but Evan Weaver compared a few of the common Rails search options (Sphinx, Ferret (a port of Lucene for Ruby) and Solr), running some benchmarks. Could be useful, I guess.
I've not plumbed the depths of MySQL's full-text search, but I know it doesn't compete speed-wise nor feature-wise with Sphinx, Lucene or Solr.
| PostgreSQL | 737,275 | 329 |
What is the best way to check whether a value is null or an empty string in Postgres SQL statements?
The value can be a long expression, so it is preferable that it is written only once in the check.
Currently I'm using:
coalesce( trim(stringexpression),'')=''
But it looks a bit ugly.
stringexpression may be char(n) column or expression containing char(n) columns with trailing spaces.
What is the best way?
| The expression stringexpression = '' yields:
true .. for '' (or for any string consisting of only spaces with the data type char(n))
null .. for null
false .. for anything else
"stringexpression is either null or empty"
To check for this, use:
(stringexpression = '') IS NOT FALSE
Or the reverse approach (may be easier to read):
(stringexpression <> '') IS NOT TRUE
Works for any character type including char(n).
The manual about comparison operators.
Or use your original expression without trim(), which would be costly noise for char(n) (see below), or incorrect for other character types: strings consisting of only spaces would pass as empty string.
coalesce(stringexpression, '') = ''
But the expressions at the top are faster.
"stringexpression is neither null nor empty"
Asserting the opposite is simpler:
stringexpression <> ''
Either way, document your exact intention in an added comment if there is room for ambiguity.
About char(n)
The data type char(n) is short for character(n).
char / character are short for char(1) / character(1).
bpchar is an internal alias of character. (Think "blank-padded character".)
This data type is supported for historical reasons and for compatibility with the SQL standard, but its use is discouraged in Postgres:
In most situations text or character varying should be used instead.
Do not confuse char(n) with other, useful, character types varchar(n), varchar, text or "char" (with double-quotes).
In char(n) an empty string is not different from any other string consisting of only spaces. All of these are folded to n spaces in char(n) per definition of the type. It follows logically that the above expressions work for char(n) as well - just as much as these (which wouldn't work for other character types):
coalesce(stringexpression, ' ') = ' '
coalesce(stringexpression, '') = ' '
Demo
Empty string equals any string of spaces when cast to char(n):
SELECT ''::char(5) = ''::char(5) AS eq1
, ''::char(5) = ' '::char(5) AS eq2
, ''::char(5) = ' '::char(5) AS eq3;
Result:
eq1 | eq2 | eq3
----+-----+----
t | t | t
Test for "null or empty string" with char(n):
SELECT stringexpression
, stringexpression = '' AS base_test
, (stringexpression = '') IS NOT FALSE AS test1
, (stringexpression <> '') IS NOT TRUE AS test2
, coalesce(stringexpression, '') = '' AS coalesce1
, coalesce(stringexpression, ' ') = ' ' AS coalesce2
, coalesce(stringexpression, '') = ' ' AS coalesce3
FROM (
VALUES
('foo'::char(5))
, ('')
, (' ') -- not different from '' in char(n)
, (null)
) sub(stringexpression);
Result:
stringexpression | base_test | test1 | test2 | coalesce1 | coalesce2 | coalesce3
------------------+-----------+-------+-------+-----------+-----------+-----------
foo | f | f | f | f | f | f
| t | t | t | t | t | t
| t | t | t | t | t | t
null | null | t | t | t | t | t
Test for "null or empty string" with text:
SELECT stringexpression
, stringexpression = '' AS base_test
, (stringexpression = '') IS NOT FALSE AS test1
, (stringexpression <> '') IS NOT TRUE AS test2
, coalesce(stringexpression, '') = '' AS coalesce1
, coalesce(stringexpression, ' ') = ' ' AS coalesce2
, coalesce(stringexpression, '') = ' ' AS coalesce3
FROM (
VALUES
('foo'::text)
, ('')
, (' ') -- different from '' in sane character types
, (null)
) sub(stringexpression);
Result:
stringexpression | base_test | test1 | test2 | coalesce1 | coalesce2 | coalesce3
------------------+-----------+-------+-------+-----------+-----------+-----------
foo | f | f | f | f | f | f
| t | t | t | t | f | f
| f | f | f | f | f | f
null | null | t | t | t | t | f
db<>fiddle here
Old sqlfiddle
Related:
Any downsides of using data type "text" for storing strings?
| PostgreSQL | 23,766,084 | 329 |
Is there a way using SQL to list all foreign keys for a given table? I know the table name / schema and I can plug that in.
| You can do this via the information_schema tables. For example:
SELECT
tc.table_schema,
tc.constraint_name,
tc.table_name,
kcu.column_name,
ccu.table_schema AS foreign_table_schema,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
AND tc.table_schema='myschema'
AND tc.table_name='mytable';
If you need to go the other way, i.e., find all places a table is used as a foreign table, you can replace the last two conditions with:
AND ccu.table_schema='myschema'
AND ccu.table_name='mytable';
| PostgreSQL | 1,152,260 | 329 |
Since PostgreSQL came out with the ability to do LATERAL joins, I've been reading up on it since I currently do complex data dumps for my team with lots of inefficient subqueries that make the overall query take four minutes or more.
I understand that LATERAL joins may be able to help me, but even after reading articles like this one from Heap Analytics, I still don't quite follow.
What is the use case for a LATERAL join? What is the difference between a LATERAL join and a subquery?
| What is a LATERAL join?
The feature was introduced with PostgreSQL 9.3. The manual:
Subqueries appearing in FROM can be preceded by the key word
LATERAL. This allows them to reference columns provided by preceding
FROM items. (Without LATERAL, each subquery is evaluated
independently and so cannot cross-reference any other FROM item.)
Table functions appearing in FROM can also be preceded by the key
word LATERAL, but for functions the key word is optional; the
function's arguments can contain references to columns provided by
preceding FROM items in any case.
Basic code examples are given there.
More like a correlated subquery
A LATERAL join is more like a correlated subquery, not a plain subquery, in that expressions to the right of a LATERAL join are evaluated once for each row left of it - just like a correlated subquery - while a plain subquery (table expression) is evaluated once only. (The query planner has ways to optimize performance for either, though.)
Related answer with code examples for both side by side, solving the same problem:
Optimize GROUP BY query to retrieve latest row per user
For returning more than one column, a LATERAL join is typically simpler, cleaner and faster.
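For illustration, a sketch with hypothetical table and column names that returns several columns of the latest purchase per customer:
SELECT c.customer_id, p.purchased_at, p.total
FROM customers c
CROSS JOIN LATERAL (
   SELECT purchased_at, total
   FROM purchases
   WHERE purchases.customer_id = c.customer_id
   ORDER BY purchased_at DESC
   LIMIT 1
   ) p;
Swap CROSS JOIN LATERAL for LEFT JOIN LATERAL ... ON true if customers without purchases should still appear in the result.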
Also, remember that the equivalent of a correlated subquery is LEFT JOIN LATERAL ... ON true:
Call a set-returning function with an array argument multiple times
Things a subquery can't do
There are things that a LATERAL join can do, but a (correlated) subquery cannot (easily). A correlated subquery can only return a single value, not multiple columns and not multiple rows - with the exception of bare function calls (which multiply result rows if they return multiple rows). But even certain set‑returning functions are only allowed in the FROM clause. Like unnest() with multiple parameters in Postgres 9.4 or later. The manual:
This is only allowed in the FROM clause;
So this works, but cannot (easily) be replaced with a subquery:
CREATE TABLE tbl (a1 int[], a2 int[]);
SELECT * FROM tbl, unnest(a1, a2) u(elem1, elem2); -- implicit LATERAL
The comma (,) in the FROM clause is short notation for CROSS JOIN.
LATERAL is assumed automatically for table functions.
About the special case of UNNEST( array_expression [, ... ] ):
How do you declare a set-returning-function to only be allowed in the FROM clause?
Set-returning functions in the SELECT list
You can also use set-returning functions like unnest() in the SELECT list directly. This used to exhibit surprising behavior with more than one such function in the same SELECT list up to Postgres 9.6. But it has finally been sanitized with Postgres 10 and is a valid alternative now (even if not standard SQL). See:
What is the expected behaviour for multiple set-returning functions in SELECT clause?
Building on above example:
SELECT *, unnest(a1) AS elem1, unnest(a2) AS elem2
FROM tbl;
Comparison:
fiddle for pg 9.6
fiddle for pg 10
To note: a (combination of) set-returning function(s) in the SELECT list that produces no rows eliminates the row. Internally it translates to CROSS JOIN LATERAL ROWS FROM ..., not to LEFT JOIN LATERAL ... ON true!
fiddle for pg 16 demonstrating the difference.
Clarify misinformation
The manual:
For the INNER and OUTER join types, a join condition must be
specified, namely exactly one of NATURAL, ON join_condition,
or USING (join_column [, ...]). See below for the meaning.
For CROSS JOIN, none of these clauses can appear.
So these two queries are valid (even if not particularly useful):
SELECT *
FROM tbl t
LEFT JOIN LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t ON true;
SELECT *
FROM tbl t, LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t;
While this one is not:
SELECT *
FROM tbl t
LEFT JOIN LATERAL (SELECT * FROM b WHERE b.t_id = t.t_id) t;
That's why Andomar's code example is correct (the CROSS JOIN does not require a join condition) and Attila's was not.
| PostgreSQL | 28,550,679 | 324 |
I have a Postgresql database on which I want to do a few cascading deletes. However, the tables aren't set up with the ON DELETE CASCADE rule. Is there any way I can perform a delete and tell Postgresql to cascade it just this once? Something equivalent to
DELETE FROM some_table CASCADE;
The answers to this older question make it seem like no such solution exists, but I figured I'd ask this question explicitly just to be sure.
| No. To do it just once you would simply write the delete statement for the table you want to cascade.
DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_Table);
DELETE FROM some_table;
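If the two deletes need to succeed or fail together, you can wrap them in a transaction (a sketch reusing the same placeholder names):
BEGIN;
DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_table);
DELETE FROM some_table;
COMMIT;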
| PostgreSQL | 129,265 | 322 |
I'm trying to remove a key from a RethinkDB document.
My approaches (which didn't work):
r.db('db').table('user').replace(function(row){delete row["key"]; return row})
Other approach:
r.db('db').table('user').update({key: null})
This one just sets row.key = null (which looks reasonable).
Examples tested on rethinkdb data explorer through web UI.
| Here's the relevant example from the documentation on RethinkDB's website: http://rethinkdb.com/docs/cookbook/python/#removing-a-field-from-a-document
To remove a field from all documents in a table, you need to use replace to update the document to not include the desired field (using without):
r.db('db').table('user').replace(r.row.without('key'))
To remove the field from one specific document in the table:
r.db('db').table('user').get('id').replace(r.row.without('key'))
You can change the selection of documents to update by using any of the selectors in the API (http://rethinkdb.com/api/), e.g. db, table, get, get_all, between, filter.
| RethinkDB | 18,580,397 | 47 |
I'm developing an application that runs distributed, and I have a SQLite database that must be shared between the distributed servers.
If I'm on serverA and change a SQLite row, this change must reach the other servers instantly; and if a server was offline and then comes back online, it must update all of its data to match the other servers.
I'm trying to develop an HA service with small SQLite databases.
I'm thinking of something like MongoDB or RethinkDB, because their replication works well and I would have the data regardless of which server happens to be online.
Is there a library or other SQL methodology to share data between servers?
| I used the Raft consensus protocol to replicate my SQLite database. You can find the system here:
https://github.com/rqlite/rqlite
| RethinkDB | 16,032,825 | 45 |
This is my first official question here; I welcome any/all criticism of my post so that I can learn how to be a better SO citizen.
I am vetting non-relational DBMS for storing potentially large email opt-out lists, leaning toward either MongoDB or RethinkDB, using their respective Python client libraries. The pain point of my application is bulk insert performance, so I have set up two Python scripts to insert 20,000 records in batches of 5,000 into both a MongoDB and a RethinkDB collection.
The MongoDB python script mongo_insert_test.py:
NUM_LINES = 20000
BATCH_SIZE = 5000
def insert_records():
collection = mongo.recips
i = 0
batch_counter = 0
batch = []
while i <= NUM_LINES:
i += 1
recip = {
'address': "test%d@test%d.com" % (i, i)
}
if batch_counter <= BATCH_SIZE:
batch.append(recip)
batch_counter += 1
if (batch_counter == BATCH_SIZE) or i == NUM_LINES:
collection.insert(batch)
batch_counter = 0
batch = []
if __name__ == '__main__':
insert_records()
The almost identical RethinkDB python script rethink_insert_test.py:
NUM_LINES = 20000
BATCH_SIZE = 5000
def insert_records():
i = 0
batch_counter = 0
batch = []
while i <= NUM_LINES:
i += 1
recip = {
'address': "test%d@test%d.com" % (i, i)
}
if batch_counter <= BATCH_SIZE:
batch.append(recip)
batch_counter += 1
if (batch_counter == BATCH_SIZE) or i == NUM_LINES:
r.table('recip').insert(batch).run()
batch_counter = 0
batch = []
if __name__ == '__main__':
insert_records()
In my dev environment, the MongoDB script inserts 20,000 records in under a second:
$ time python mongo_insert_test.py
real 0m0.618s
user 0m0.400s
sys 0m0.032s
In the same environment, the RethinkDB script performs much slower, inserting 20,000 records in over 2 minutes:
$ time python rethink_insert_test.py
real 2m2.502s
user 0m3.000s
sys 0m0.052s
Am I missing something huge here with regard to how these two DBMS work? Why is RethinkDB performing so badly with this test?
My dev machine had about 1.2GB available memory for these tests.
| RethinkDB currently implements batch inserts by doing a single insert at a time on the server. Since Rethink flushes every record to disk (because it's designed with safety first in mind), this has a really bad effect on workloads like this one.
We're doing two things to address this:
Bulk inserts will be implemented via a bulk insert algorithm on the server to avoid doing one insert at a time.
We will give you the option to relax durability constraints to allow the cache memory to absorb high-throughput inserts if you'd like (in exchange for not syncing to disk as often).
This will absolutely be fixed in 4-12 weeks (and if you need this ASAP, feel free to shoot me an email to [email protected] and I'll see if we can reprioritize).
Here are the relevant github issues:
https://github.com/rethinkdb/rethinkdb/issues/207
https://github.com/rethinkdb/rethinkdb/issues/314
Hope this helps. Please don't hesitate to ping us if you need help.
| RethinkDB | 15,151,554 | 25 |
One way I know I can do it is by listing them through dbList() and tableList() and then looking for what I want in the results.
Is there an easier way?
EDIT
My goal is to create a table in case it doesn't exist.
| If you want to create a database if it does not exist, or get a value like "database already exists" if it does exist, you could do something like the following:
r.dbList().contains('example_database')
.do(function(databaseExists) {
return r.branch(
databaseExists,
{ dbs_created: 0 },
r.dbCreate('example_database')
);
}).run();
It will return the following if it is created:
{
"config_changes": [
{
"new_val": {
"id": "1ee7ddb4-6e2c-43bb-a0f5-64ef6a6211a8",
"name": "example_database"
},
"old_val": null
}
],
"dbs_created": 1
}
And this if it already exists:
{
"dbs_created": 0
}
| RethinkDB | 31,625,913 | 24 |
I have this object:
{
"id": "eb533cd0-fef1-48bf-9fb8-b66261c9171b" ,
"errors": [
"error1" ,
"error2"
]
}
I simply want to append a new error to the errors array. I tried:
r.db('test').table('taskQueue').get("eb533cd0-fef1-48bf-9fb8-b66261c9171b").update({'errors': r.row['errors'].append('appended error')})
but this did not work. It gives this error: "TypeError: r.row.errors is undefined"
My question is: how do I append to the array?
| r.db('test').table('taskQueue').get("eb533cd0-fef1-48bf-9fb8-b66261c9171b").update({
errors: r.row('errors').append('appended error')
})
So not r.row['errors'], but r.row('errors').
| RethinkDB | 22,846,614 | 19 |
Currently I'm using socket.io without RethinkDB like this:
Clients emit events to socket.io, which receives the events, emits to various other clients, and saves to the db for persistence. A new client connecting will get existing data from the db then listen to new events over socket.io.
How would switching to RethinkDB and the changefeed help me here?
The way I see the same working with RethinkDB is the client could do a POST (which inserts into RethinkDB) instead of emitting to socket.io, and then socket.io is watching a RethinkDB changefeed and emitting to all clients when it receives new data.
How is this method using RethinkDB and the changefeed better than my current method? To me they both feel like they accomplish the same thing, but I don't see any obvious advantage in the RethinkDB method, and because I'd be going to the db rather than emitting from socket.io on the server straight away it will surely be a bit slower.
| First, let's clarify the relationship between socket.io and RethinkDB changefeeds. Socket.io is intended for realtime communication between the client (the browser) and the server (Node.js). RethinkDB changefeeds are a way for your server (Node.js) to listen to changes in the database. The client can't communicate with RethinkDB directly.
A very typical architecture for a realtime app is to have RethinkDB changefeeds subscribe to changes in the database and then use socket.io to pass those changes to the client. The client usually also emits messages which can get written to your database, depending on your application logic.
Yes, you could just emit all messages through socket.io then pass all messages to all clients, and then just write those messages to the database for persistence. It's also true that this is faster, but there are a number of disadvantages to this approach.
1. Database as single source of truth
The easiest problem to spot is the following:
What happens if your app isn't able to write something to the
database?
What happens if the data you're trying to insert into the database is invalid or a duplicate? Do you write application logic to handle this?
What happens if the Node.js server goes down before sending out the
write query?
These are just some quick examples in which, because of your architecture, you will lose or have out-of-sync data. And just to reiterate this, you WILL lose data, because your main source of truth is in-memory. You might also have discrepancies between the data in your Node.js app and your DB.
The point is that the database should always be your single source of truth and you should only acknowledge data when it's written to disk. I'm not sure how anyone would be able to sleep at night otherwise.
2. Advanced Queries
If you just pass all new messages from all clients to all clients through socket.io, you now have to have some pretty complex logic in your client in order to filter out all the data that's actually important. Take into consideration that you're passing a lot of useless data through the network that the client won't actually use.
The alternative is writing a pub/sub system in which you subscribe to certain channels (or something like that) in order to filter out the data that's actually important to the client.
RethinkDB solves this by providing its very own query language that you can attach to changefeeds. If the client, for example, needs all the users in my users table between the ages of 20 and 30, who live in the state of California, 10 miles from San Francisco, and who have bought a book within the last 6 months, this can be expressed in ReQL (RethinkDB's query language) and a changefeed can be set up for that query, so that the client only gets notified of relevant changes. This is much harder to do with just Socket.io and Node.js.
3. Scalability
The last problem that RethinkDB solves is that it's a much more scalable solution than just storing everything in memory (through Socket.io and Node.js). Because RethinkDB is built from the ground up to be distributed, you can have a cluster of 20+ RethinkDB nodes with shards and replicas. Every RethinkDB query you write is distributed by default. Now, you can have 20+ other Node.js nodes that are stateless and are all listening to changefeeds. Because the database is the central source of truth, this is not a problem.
The alternative would be to limit yourself to one server, have some other pub/sub system (built on something like Redis, for example), have only a single database that you poll... There are probably more examples, but you can see where I'm going with this.
I'd love to hear if this answered your question and if I'm getting where you're coming from. It's a little hard to get how to structure your applications at first, but it really is an elegant solution for most realtime architectures.
| RethinkDB | 30,527,698 | 19 |
Having just arrived at Elixir/Phoenix, I want to use RethinkDB instead of PostgreSQL, but I only find documentation/examples for PostgreSQL (which seems to be the default/official database). There is a very good package from Hamiltop (Rethinkdb-elixir), but unfortunately the documentation in the wiki is not ready and what is in the README is not enough for me.
I absolutely don't want to use SQL (I came from using Meteor/MongoDB, where the database was not an issue).
Can anyone show me a simple example of the code I need to:
Connect to RethinkDB;
Start the server/manage the server/connections;
Create a database/table;
Perform basic CRUD operations.
This could sound silly, but as Meteor took care of these things for us, this is now an issue for me... because I'm not able to do it properly. Thanks!
| Step 1) Generate project without ecto:
mix phoenix.new some_app --no-ecto
Step 2) Add rethinkdb as a dependency in mix.exs
defp deps do
[{:phoenix, "~> 0.13.1"},
{:phoenix_html, "~> 1.0"},
{:phoenix_live_reload, "~> 0.4", only: :dev},
{:rethinkdb, "~> 0.0.5"},
{:cowboy, "~> 1.0"}]
end
Step 3) Run mix deps.get
Step 4) Create a database:
defmodule SomeApp.Database do
use RethinkDB.Connection
end
Step 5) Add it to your supervision tree in lib/some_app.ex - the name should match your database module above (SomeApp.Database)
def start(_type, _args) do
import Supervisor.Spec, warn: false
children = [
# Start the endpoint when the application starts
supervisor(SomeApp.Endpoint, []),
worker(RethinkDB.Connection, [[name: SomeApp.Database, host: 'localhost', port: 28015]])
# Here you could define other workers and supervisors as children
]
# See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
# for other strategies and supported options
opts = [strategy: :one_for_one, name: Rethink.Supervisor]
Supervisor.start_link(children, opts)
end
Step 6) Execute a query:
defmodule Rethink.PageController do
use Rethink.Web, :controller
use RethinkDB.Query
plug :action
def index(conn, _params) do
table_create("people")
|> SomeApp.Database.run
|> IO.inspect
table("people")
|> insert(%{first_name: "John", last_name: "Smith"})
|> SomeApp.Database.run
|> IO.inspect
table("people")
|> SomeApp.Database.run
|> IO.inspect
render conn, "index.html"
end
end
Please note: I have put the queries in the PageController just for the ease of running. In a real example, these would be in separate module - maybe one that represents your resource.
The other thing to note is that I am creating the table inline on the controller. You could execute the command to create a table in a file such as priv/migrations/create_people.exs and run it with mix run priv/migrations/create_people.exs
| RethinkDB | 31,457,945 | 18 |
How to create unique items in RethinkDB?
In MongoDb I used ensureIndex for this, eg:
userCollection.ensureIndex({email:1},{unique:true},function(err, indexName){
| RethinkDB does not currently support uniqueness constraints on fields other than the primary key.
You could use an auxiliary table where the unique field is stored as the primary key in order to check for uniqueness in your application explicitly.
| RethinkDB | 17,789,123 | 17 |
I am building an application with RethinkDB and I'm about to switch to using changefeeds. But I'm facing an architectural choice and I'd like to get some advice.
My application currently loads all user data from several tables on user login (sending all of it to the frontend), and then processes requests from the frontend, altering the database, and preparing and sending changed items to users. I'd like to switch that over to changefeeds. The way I see it, I have two choices:
Set up a single changefeed for each table. Filter by users logged in to a particular server, and distribute the changes to users manually. These changefeeds are never closed, e.g. they have the lifetime of my servers.
When a user logs in, set up an individual changefeed for that user, for that user's data only (using a getAll with a secondary index). Maintain as many changefeeds as there are currently logged in users. Close them when users log out.
Solution #1 has a big disadvantage: RethinkDB changefeeds do not have a concept of time (or version number), like for example Kafka does. This means that there is no way to a) load initial data, and b) get changes that happened since the initial load. There is a time window where changes can be lost: between initial data load (a) and the moment the changefeed is set up (b). I find this worrying.
Solution #2 seems better, because includeInitial can be used to get initial data, and then get subsequent changes without interruption. I'd have to deal with initial load performance (it's faster to load a single dump of all data than process thousands of updates), but it seems more "correct". But what about scaling? I'm planning to handle up to 1k users per server — is RethinkDB prepared to handle thousands of changefeeds, each being essentially a getAll query? The actual activity in these changefeeds will be very low, it's just the number that I'm worried about.
The RethinkDB manual is a bit terse about changefeed scaling, saying that:
Changefeeds perform well as they scale, although they create extra intracluster messages in proportion to the number of servers with open feed connections on each write.
Solution #2 creates many more feeds, but the number of servers with open feed connections is actually the same for both solutions. And "changefeeds perform well as they scale" isn't quite enough to go on :-)
I'd also be interested to know what are recommended practices for handling server restarts/upgrades and disconnections. The way I see it, if anything happens to RethinkDB, clients have to perform a full data load (using includeInitial) after reconnecting, because there is no way to know what changes have been lost during downtime. Is that what people do?
| RethinkDB should be able to handle thousands of changefeeds just fine if it's on reasonable hardware. One thing some people do to lower network load in that case is put a proxy node on the same machine as their app server and connect to that, since the proxy node knows enough to deduplicate the changefeed messages coming in over the network, and because it takes a lot of CPU/memory load off of their main cluster.
Currently the only way to recover from a crash is to restart the changefeed using includeInitial. There are plans to add write timestamps in the future, but handling deletes is complicated in that case.
| RethinkDB | 37,510,529 | 13 |
How to make a rethinkdb atomic update if document exists, insert otherwise?
I want to do something like:
var tab = r.db('agflow').table('test');
r.expr([{id: 1, n: 0, x: 11}, {id: 2, n: 0, x: 12}]).forEach(function(row){
var _id = row('id');
return r.branch(
tab.get(_id).eq(null), // 1
tab.insert(row), // 2
tab.get(_id).update(function(row2){return {n: row2('n').add(row('n'))}}) // 3
)})
However, this is not fully atomic, because between the time when we check whether the document exists (1) and insert it (2), some other thread may insert it.
How to make this query atomic?
| I think the solution is passing
conflict="update"
to the insert method.
Also see the RethinkDB documentation on insert
| RethinkDB | 24,306,933 | 12 |
On the api documentation page rethinkdb.com/api/javascript I can only find commands to create, drop and list databases.
But how can I rename a database in RethinkDB?
| You basically have two options:
1. Update the name using the .config method
You can update the name using the .config method that every database and table has. This would look something like this:
r
.db("db_name")
.config()
.update({name: "new_db_name"})
2. Update the db_config table
You can also execute a query on the db_config table and just do an update on the db you want to change. This would look something like this:
r
.db('rethinkdb')
.table('db_config')
.filter({ name: 'old_db_name' })
.update({ name: 'new_db_name'})
| RethinkDB | 29,378,739 | 12 |
I am starting with a simple TODO app with Aurelia, RethinkDB & Socket.IO. I seem to have a problem with re-rendering or re-evaluating an object that is changed through Socket.IO. So basically, everything works well in the first browser but doesn't get re-rendered in the second browser, while displaying the object in the console does show differences in my object. The problem occurs only when updating an object; it works perfectly when creating/deleting an object from the array of todo items.
HTML
<ul>
<li repeat.for="item of items">
<div show.bind="!item.isEditing">
<input type="checkbox" checked.two-way="item.completed" click.delegate="toggleComplete(item)" />
<label class="${item.completed ? 'done': ''} ${item.archived ? 'archived' : ''}" click.delegate="$parent.editBegin(item)">
${item.title}
</label>
<a href="#" click.delegate="$parent.deleteItem(item, $event)"><i class="glyphicon glyphicon-trash"></i></a>
</div>
<div show.bind="item.isEditing">
<form submit.delegate="$parent.editEnd(item)">
<input type="text" value.bind="item.title" blur.delegate="$parent.editEnd(item)" />
</form>
</div>
</li>
</ul>
NodeJS with RethinkDB changefeeds
// attach a RethinkDB changefeeds to watch any changes
r.table(config.table)
.changes()
.run()
.then(function(cursor) {
//cursor.each(console.log);
cursor.each(function(err, item) {
if (!!item && !!item.new_val && item.old_val == null) {
io.sockets.emit("todo_create", item.new_val);
}else if (!!item && !!item.new_val && !!item.old_val) {
io.sockets.emit("todo_update", item.new_val);
}else if(!!item && item.new_val == null && !!item.old_val) {
io.sockets.emit("todo_delete", item.old_val);
}
});
})
.error(function(err){
console.log("Changefeeds Failure: ", err);
});
Aurelia code watching Socket.on
// update item
socket.on("todo_update", data => {
let pos = arrayFindObjectIndex(this.items, 'id', data.id);
if(pos >= 0) {
console.log('before update');
console.log(this.items[pos]);
this.items[pos] = data;
this.items[pos].title = this.items[pos].title + ' [updated]';
console.log('after update');
console.log(this.items[pos]);
}
});
// create item, only add the item if we don't have it already in the items list to avoid dupes
socket.on("todo_create", data => {
if (!_.some(this.items, function (p) {
return p.id === data.id;
})) {
this.items.unshift(data);
}
});
// delete item, only delete item if found in items list
socket.on("todo_delete", data => {
let pos = arrayFindObjectIndex(this.items, 'id', data.id);
if(pos >= 0) {
this.items.splice(pos, 1);
}
});
The socket.on("todo_update", ...){} is not making the second browser re-render but showing the object in the console before/after update does show differences in the object itself. I even changed the todo title property and that too doesn't get re-rendered.
How can I get Aurelia to re-render in my second browser with the new object properties? Don't be too hard on me, I'm learning Aurelia/RethinkDB/NodeJS/Socket.IO all the same time...
| Aurelia observes changes to the contents of an array by overriding the array's mutator methods (push, pop, splice, shift, etc). This works well for most use-cases and performs really well (no dirty-checking, extremely lightweight in terms of memory and cpu). Unfortunately this leaves one way of mutating an array that aurelia can't "see": indexed assignment... eg myArray[6] = 'foo'. Since no array methods were called, the binding system doesn't know the array changed.
In your case, try changing this:
// update item
socket.on("todo_update", data => {
let pos = arrayFindObjectIndex(this.items, 'id', data.id);
if(pos >= 0) {
console.log('before update');
console.log(this.items[pos]);
this.items[pos] = data; // <-- change this to: this.items.splice(pos, 1, data);
this.items[pos].title = this.items[pos].title + ' [updated]';
console.log('after update');
console.log(this.items[pos]);
}
});
| RethinkDB | 36,394,399 | 12 |
I am building the back-end for my web app; it would act as an API for the front-end and it will be written in Python (Flask, to be precise).
After making some decisions regarding design and implementation, I got to the database part, and I started wondering whether NoSQL data storage may be more appropriate for my project than a traditional SQL database. Below is a basic description of the functionality the database should handle, then a list of pros and cons I could come up with regarding which type of storage I should opt for, and finally some words about why I have considered RethinkDB over other NoSQL data stores.
Basic functionality of the API
The API consists of only a few models: Artist, Song, Suggestion, User and UserArtists.
I would like to be able to add a User with some associated data and link some Artists to it. I would like to add Songs to Artists on request, and also generate a Suggestion for a User, which will contain an Artist and a Song.
Maybe one of the most important parts is that Artists will be periodically linked to Users (and also Artists can be removed from the system -- hence from Users too -- if they don't satisfy some criteria). Songs will also be dynamically added to Artists. All this means is that Users don't have a fixed set of Artists and nor do Artists have a fixed set of Songs -- they will be continuously updating.
Pros
for NoSQL:
Flexible schema, since not every Artist will have a FacebookID or Song a SoundcloudID;
Since this is a JSON API, I believe I would benefit from the fact that records are stored as JSON;
I believe the number of Songs, and especially Suggestions, will rise quite a bit, hence NoSQL will do a better job here;
for SQL:
Its fixed schema may come in handy with relations between models;
Flask has support for SQLAlchemy which is very helpful in defining models;
Cons
for NoSQL:
Relations are harder to implement, and updating models in a transaction-like way involves a bit of code;
Flask doesn't have any wrapper or module to ease things, hence I will need to implement some kind of wrapper to help me make the code more readable while doing database operations;
I don't have any certainty on how should I store my records, especially UserArtists
for SQL:
Operations are bulky, I have to define schemas, check whether columns have defaults, assign defaults, validate data, begin/commit transactions -- I believe it's too much of a hassle for something simple like an API;
Why RethinkDB?
I've considered RethinkDB for a possible NoSQL implementation for my API because of the following:
It looks simpler and more lightweight than other solutions;
It has native Python support which is a big plus;
It implements table joins and other things which could come in handy in my API, which has some relations between models;
It is rather new, and I see a lot of involvement and love from the community. There's also the will to continuously add new things that improve database interaction.
All these being considered, I would be glad to hear any advice on whether NoSQL or SQL is more appropriate for my needs, as well as any other pro/con on the two, and of course, some corrections on things I haven't stated properly.
| I'm working at RethinkDB, but that's my unbiased answer as a web developer (at least as unbiased as I can be).
Flexible schemas are nice from a developer's point of view (and in your case). Like you said, with something like PostgreSQL you would have to format all the data you pull from third parties (SoundCloud, Facebook, etc.). And while that's not really hard to do, it's not enjoyable.
Being able to join tables is, for me, the natural way of doing things (like for user/userArtist/artist). While you could have a structure where a user would contain artists, it is going to be unpleasant to use when you need to retrieve artists and, for each of them, a list of users.
The first point is something common in NoSQL databases, while JOIN operations are more a SQL databases thing.
You can see RethinkDB as something providing the best of each world.
I believe that developing with RethinkDB is easy, fast and enjoyable, and that's what I am looking for as a web developer.
There is however one thing that you may need and that RethinkDB does not deliver, which is transactions. If you need atomic updates on multiple tables (or documents - like if you have to transfer money between users), you are definitely better off with something like PostgreSQL. If you just need updates on multiple tables, RethinkDB can handle that.
And like you said, while RethinkDB is new, the community is amazing, and we - at RethinkDB - care a lot about our users.
If you have more questions, I would be happy to answer them : )
| RethinkDB | 20,597,590 | 11 |
I am using testdouble for stubbing calls within my node.js project. This particular function is wrapping a promise and has multiple then calls within the function itself.
function getUser (rethink, username) {
return new Promise((resolve, reject) => {
let r = database.connect();
r.then(conn => database.table(tablename).filter({username}))
.then(data => resolve(data))
.error(err => reject(err));
});
}
So I am wanting to determine if the resolve and reject are handled correctly based on error conditions. Assume there is some custom logic in there that I need to validate.
For my test
import getUser from './user';
import td from 'testdouble';
test(t => {
const db = td.object();
const connect = td.function();
td.when(connect('options')).thenResolve();
const result = getUser(db, 'testuser');
t.verify(result);
}
The issue is that the result of connect needs to be a promise, so I use thenResolve with a value, which itself needs to be another promise that resolves or rejects.
The error relates to the line where the result of database.connect() is used, because that result is not a promise.
TypeError: Cannot read property 'then' of undefined
Anyone have success with stubbing this type of call with Test Double?
| So figured out the resolution. There are a few things to note in the solution and that we encountered. In short the resolution ended up being this...
td.when(database.connect()).thenResolve({then: (resolve) => resolve('ok')});
This resolves a thenable that is returned when test double sees database connect. Then subsequent calls can also be added.
One other thing to note: if you send an object into database.connect(), be aware that testdouble does === equality checking, so you will need a reference to that same object for td.when to match it correctly.
| RethinkDB | 42,935,880 | 11 |
Attempting to use this example to join on an array of IDs: https://github.com/rethinkdb/rethinkdb/issues/1533#issuecomment-26112118
Stores table snippet
{
"storeID": "80362c86-94cc-4be3-b2b0-2607901804dd",
"locations": [
"5fa96762-f0a9-41f2-a6c1-1335185f193d",
"80362c86-94cc-4be3-b2b0-2607901804dd"
]
}
Locations table snippet
{
"lat": 125.231345,
"lng": 44.23123,
"id": "80362c86-94cc-4be3-b2b0-2607901804dd"
}
I'd like to select the stores and join their store locations.
Original example from ReThinkDB contributor:
r.table("blog_posts")
.concat_map(lambda x: x["comment_ids"].map(lambda y: x.merge({"comment_id": y})))
.eq_join("comment_id", r.table("comments"))
My attempt to convert to JS
r.table("stores")
.concatMap((function(x){
return x("locations").map((function(y){
return x("locations").add(y);
}))
}))
.eqJoin("locations", r.table("locations"))
Result
RqlRuntimeError: Expected type ARRAY but found STRING
| You're using concatMap incorrectly, here's what you want the first part of your query to be.
r.table("stores")
.concatMap(function (x) {
return x("locations");
})
Try running that, it should give you:
["5fa96762-...", "80362c86-...", ...]
Now we need to join this to the other table. To join an array of ids to a table you can use eqjoin like so:
array.eqJoin(function (row) { return row; }, table)
There are more details here: rql get multiple documents from list of keys rethinkdb in javascript.
Putting it all together we get:
r.table("stores")
.concatMap(function (x) {
return x("locations")
})
.eqJoin(function (i) { return i; }, r.table("locations"))
To get back the documents from the stores:
r.table("stores")
.concatMap(function (x) {
return x("locations").map(function (loc) {
return x.merge({locations: loc});
});
})
.eqJoin("locations", r.table("locations"))
| RethinkDB | 20,909,723 | 10 |
I'm trying to write the most optimal query to find all of the documents that do not have a specific field. Is there any better way to do this than the examples I have listed below?
// Get the ids of all documents missing "location"
r.db("mydb").table("mytable").filter({location: null},{default: true}).pluck("id")
// Get a count of all documents missing "location"
r.db("mydb").table("mytable").filter({location: null},{default: true}).count()
Right now, these queries take about 300-400ms on a table with ~40k documents, which seems rather slow. Furthermore, in this specific case, the "location" attribute contains latitude/longitude and has a geospatial index.
Is there any way to accomplish this? Thanks!
| A naive suggestion
You could use the hasFields method along with the not method to filter out unwanted documents:
r.db("mydb").table("mytable")
.filter(function (row) {
return row.hasFields({ location: true }).not()
})
This might or might not be faster, but it's worth trying.
Using a secondary index
Ideally, you'd want a way to make location a secondary index and then use getAll or between, since queries using indexes are always faster. A way you could work around that is to make all rows in your table have a false value for location if they don't have one. Then, you would create a secondary index for location. Finally, you can query the table using getAll as much as you want!
Adding a location property to all fields without a location
For that, you'd need to first insert location: false into all rows without a location. You could do this as follows:
r.db("mydb").table("mytable")
.filter(function (row) {
return row.hasFields({ location: true }).not()
})
.update({
location: false
})
After this, you would need to find a way to insert location: false every time you add a document without a location.
Create secondary index for the table
Now that all documents have a location field, we can create a secondary index for location.
r.db("mydb").table("mytable")
.indexCreate('location')
Keep in mind that you only have to add the { location: false } and create the index only once.
Use getAll
Now we can just use getAll to query documents using the location index.
r.db("mydb").table("mytable")
.getAll(false, { index: 'location' })
This will probably be faster than the query above.
Using a secondary index (function)
You can also create a secondary index as a function. Basically, you create a function and then query the results of that function using getAll. This is probably easier and more straightforward than what I proposed before.
Create the index
Here it is:
r.db("mydb").table("mytable")
  .indexCreate('has_location', function(x) {
    return x.hasFields('location');
  })
Use getAll.
Here it is:
r.db("mydb").table("mytable")
.getAll(false, { index: 'has_location' })
| RethinkDB | 29,724,041 | 10 |
I want to run a function that iterates through a generator class. The generator functions would run as long as the Ratchet connection is alive. All I need to do is to make this happen after the run method is executed:
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat;
require dirname(__DIR__) . '/xxx/vendor/autoload.php';
$server = IoServer::factory(
new HttpServer(
new WsServer(
new Chat()
)
),
8180,
'0.0.0.0'
);
$server->run();
This is the method I need to run in the server after it is started:
function generatorFunction()
{
$products = r\table("tableOne")->changes()->run($conn);
foreach ($products as $product) {
yield $product['new_val'];
}
}
Previously I was calling the function before $server->run() like this:
for ( $gen = generatorFunction(); $gen->valid(); $gen->next()) {
var_dump($gen->current());
}
$server->run();
But this doesn't allow the client to establish a connection to the Ratchet server. I suspect it never gets to $server->run() because the generator is still being iterated.
So now, I want to start the server first, then call this generator method so that it can keep listening to changes in rethinkdb.
How do I do that?
| Let's start by example:
<?php
require 'vendor/autoload.php';
class Chat implements \Ratchet\MessageComponentInterface {
function onOpen(\Ratchet\ConnectionInterface $conn) { echo "connected.\n"; }
function onClose(\Ratchet\ConnectionInterface $conn) {}
function onError(\Ratchet\ConnectionInterface $conn, \Exception $e) {}
function onMessage(\Ratchet\ConnectionInterface $from, $msg) {}
}
$loop = \React\EventLoop\Factory::create(); // create EventLoop best for given environment
$socket = new \React\Socket\Server('0.0.0.0:8180', $loop); // make a new socket to listen to (don't forget to change 'address:port' string)
$server = new \Ratchet\Server\IoServer(
/* same things that go into IoServer::factory */
new \Ratchet\Http\HttpServer(
new \Ratchet\WebSocket\WsServer(
new Chat() // dummy chat to test things out
)
),
/* our socket and loop objects */
$socket,
$loop
);
$loop->addPeriodicTimer(1, function (\React\EventLoop\Timer\Timer $timer) {
echo "echo from timer!\n";
});
$server->run();
To achieve what you need, the extra work shouldn't run before or after $server->run(); it needs to run simultaneously on the same event loop.
For that you need to get deeper than Ratchet - to ReactPHP and its EventLoop. If you have access to the loop interface then adding a timer (that executes once) or a periodic timer (every nth second) is a piece of cake.
| RethinkDB | 49,338,015 | 10 |
The DynamoDB Wikipedia article says that DynamoDB is a "key-value" database. However, calling it a "key-value" database completely misses an extremely fundamental feature of DynamoDB, that of the sort key: Keys have two parts (partition key and sort key) and items with the same partition key can be efficiently retrieved together sorted by the sort key.
Cassandra also has exactly the same sorting-items-inside-a-partition feature (which it calls "clustering key"), and the Cassandra Wikipedia article uses the term wide column store to describe it. However, while this term "wide column" is better than "key-value", it is still somewhat inappropriate because it describes the more general situation where an item can have a very large number of unrelated columns - not necessarily a sorted list of separate items.
So my question is whether there is a more appropriate term that can describe the data model of a database like DynamoDB and Cassandra - databases which like a key-value store can efficiently retrieve items for individual keys, but can also efficiently retrieve items sorted by the key or just a part of it (DynamoDB's sort key or Cassandra's clustering key).
| Before CQL was introduced, Cassandra adhered more strictly to the wide column store data model, where you only had rows identified by a row key and containing sorted key/value columns. With the introduction of CQL, rows became known as partitions, and columns could optionally be grouped into logical rows via clustering keys.
Up until Cassandra 3.0, CQL was simply an abstraction on top of the original Thrift data model, and there was no concept of CQL rows within the storage engine. They were just a sorted set of columns with a compound key consisting of the concatenated values of the clustering keys. More details are given in this article. Now there is native support for CQL in the storage engine, which allows CQL data models to be stored more efficiently.
However, if you think of a CQL row as a logical grouping of columns within the same partition, Cassandra still could be considered a wide column store. In any case, there isn't, to my knowledge, another well established term to describe this kind of database.
| Scylla | 60,798,118 | 15 |
I have a table with a column of type list and I would like to check if there is an item inside the list, using the CONTAINS keyword.
According to scylla documentation:
The CONTAINS operator may only be used on collection columns (lists, sets, and maps). In the case of maps, CONTAINS applies to the map values. The CONTAINS KEY operator may only be used on map columns and applies to the map keys.
https://docs.scylladb.com/getting-started/dml/
To reproduce the error I am receiving do the following:
CREATE TABLE test.persons ( id int PRIMARY KEY,lastname text, books list<text>);
INSERT INTO test.persons(id, lastname, books) values (1, 'Testopoulos',['Dracula','1984']);
SELECT * FROM test.persons
id | books | lastname
----+---------------------+-------------
1 | ['Dracula', '1984'] | Testopoulos
(1 rows)
SELECT * FROM test.persons WHERE books CONTAINS '1984' ALLOW FILTERING;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Collection filtering is not supported yet"
| Support for CONTAINS keyword for filtering is already implemented in Scylla, but it's not part of any official release yet - it will be included in the upcoming 3.1 release (or, naturally, if you build it yourself from the newest source).
Here's the reference from the official tracker: https://github.com/scylladb/scylla/issues/3573
| Scylla | 57,874,319 | 11 |
I am new to Docker, and trying to go through this tutorial setting up MemSQL from a Docker image - http://docs.memsql.com/4.0/setup/docker/ . I am on a Mac, and the tutorial uses boot2docker which seems to have been deprecated.
The VM needs 4GB memory to run. The tutorial specifies how to do this with boot2docker but I cannot find a way to do this with the docker-machine/docker toolbox.
Here is the command I am using and the error I am getting just trying to go through the tutorial without altering the boot2docker config.
docker run --rm --net=host memsql/quickstart check-system
Error: MemSQL requires at least 4 GB of memory to run.
| You can do this via the command line. For example, to change the machine from the default 1cpu/2048MB RAM run:
docker-machine stop
VBoxManage modifyvm default --cpus 2
VBoxManage modifyvm default --memory 4096
docker-machine start
You can then check your settings:
VBoxManage showvminfo default | grep Memory
VBoxManage showvminfo default | grep CPU
And for docker-machine inspect to report the correct state of things, edit ~/.docker/machine/machines/default/config.json to reflect your changes.
| SingleStore | 32,834,082 | 122 |
I realize other people have had similar questions but this uses v2 compose file format and I didn't find anything for that.
I want to make a very simple test app to play around with MemSQL but I can't get volumes to not get deleted after docker-compose down. If I've understood Docker Docs right, volumes shouldn't be deleted without explicitly telling it to. Everything seems to work with docker-compose up but after going down and then up again all data gets deleted from the database.
As recommended as good practice, I'm using a separate memsqldata service as a dedicated data layer.
Here's my docker-compose.yml:
version: '2'
services:
app:
build: .
links:
- memsql
memsql:
image: memsql/quickstart
volumes_from:
- memsqldata
ports:
- "3306:3306"
- "9000:9000"
memsqldata:
image: memsql/quickstart
command: /bin/true
volumes:
- memsqldatavolume:/data
volumes:
memsqldatavolume:
driver: local
| I realize this is an old and solved thread where the OP was pointing to a directory in the container rather than the volume they had mounted, but wanted to clear up some of the misinformation I'm seeing.
docker-compose down does not remove volumes; you need to run docker-compose down -v if you also want to delete volumes. Here's the help text straight from docker-compose (note the "by default" list):
$ docker-compose down --help
Stops containers and removes containers, networks, volumes, and images
created by `up`.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.
Usage: down [options]
Options:
...
-v, --volumes Remove named volumes declared in the `volumes` section
of the Compose file and anonymous volumes
attached to containers.
...
$ docker-compose --version
docker-compose version 1.12.0, build b31ff33
Here's a sample yml with a named volume to test and a dummy command:
$ cat docker-compose.vol-named.yml
version: '2'
volumes:
data:
services:
test:
image: busybox
command: tail -f /dev/null
volumes:
- data:/data
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating volume "test_data" with default driver
Creating test_test_1
After starting the container, the volume is initialized empty since the image is empty at that location. I created a quick hello world in that location:
$ docker exec -it test_test_1 /bin/sh
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
/ # echo "hello volume" >/data/hello.txt
/ # ls -al /data
total 12
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
-rw-r--r-- 1 root root 13 May 23 01:24 hello.txt
/ # cat /data/hello.txt
hello volume
/ # exit
The volume is visible outside of docker and is still there after a docker-compose down:
$ docker volume ls | grep test_
local test_data
$ docker-compose -f docker-compose.vol-named.yml down
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
$ docker volume ls | grep test_
local test_data
Recreating the container uses the old volume with the file still visible inside:
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
$ docker exec -it test_test_1 /bin/sh
/ # cat /data/hello.txt
hello volume
/ # exit
And running a docker-compose down -v finally removes both the container and the volume:
$ docker-compose -f docker-compose.vol-named.yml down -v
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
Removing volume test_data
$ docker volume ls | grep test_
$
If you find your data is only being persisted if you use a stop/start rather than a down/up, then your data is being stored in the container (or possibly an anonymous volume) rather than your named volume, and the container is not persistent. Make sure the location for your data inside the container is correct to avoid this.
To debug where data is being stored in your container, I'd recommend using docker diff on a container. That will show all of the files created, modified, or deleted inside that container which will be lost when the container is deleted. E.g.:
$ docker run --name test-diff busybox \
/bin/sh -c "echo hello docker >/etc/hello.conf"
$ docker diff test-diff
C /etc
A /etc/hello.conf
| SingleStore | 35,620,997 | 26 |
MemSQL is claiming to be the "Worlds Fastest Database"
Faster than DB2, Oracle, mySQL and SQL server. Can anyone vouch for this?
I have searched the web and tried to gather as much information as possible about this. MemSQL is claiming to be the fastest database on the planet. Faster than Oracle, DB2, MySQL and MS SQL.
I have even spoken with their staff and founders (Facebook ex-employees) about their product. I have also seen the witty benchmark tests everyone is piping about. Is it really worth the move? Ashton Kutcher being an angel venture capitalist behind the group does not show me real worth. I mean, Google BigTable would be a better move for some.
Can anyone share tips, articles, tutorials... even some of their real customers? Their website does not show real examples and it lacks the kind of community you get with other database products, commercial ones included.
There is hardly anything out there on it and I really want a proper look at its stack. I am bought in by the hype, I admit: it looks great, the videos, the documentation and the brand... but really, is it that good? The commercial product is limited to only 32GB and I know many databases easily exceed that.
| MemSQL lacks community support, there is little public information about the enterprise license (a version for more than 32GB is available), and maybe there is no proof of speed for your application yet.
However, you have to try it. Really. The performance for static queries and prepared statements is promising; the best use case I could emphasize is low-latency writing/updating of records at extreme concurrency. You can't achieve performance like this with traditional relational databases, and NoSQL solutions with this kind of performance are rare.
In a short time it will get SQL92 compliance, so development and testing will be easier for you or your developers (as opposed to a NoSQL DB backend). There are thousands of applications with built-in performance benchmarks and long-running stability testing; choose the one which matches your case best. Personally I've tested Drupal: the worst case matched a heavily customized MySQL configuration, and on average it became 10x faster on the DB side for logged-in users.
| SingleStore | 13,892,907 | 20 |
Using Spark 1.4.0, I am trying to insert data from a Spark DataFrame into a MemSQL database (which should be exactly like interacting with a MySQL database) using insertIntoJdbc(). However I keep getting a Runtime TableAlreadyExists exception.
First I create the MemSQL table like this:
CREATE TABLE IF NOT EXISTS table1 (id INT AUTO_INCREMENT PRIMARY KEY, val INT);
Then I create a simple dataframe in Spark and try to insert into MemSQL like this:
val df = sc.parallelize(Array(123,234)).toDF.toDF("val")
//df: org.apache.spark.sql.DataFrame = [val: int]
df.insertIntoJDBC("jdbc:mysql://172.17.01:3306/test?user=root", "table1", false)
java.lang.RuntimeException: Table table1 already exists.
| This solution applies to general JDBC connections, although the answer by @wayne is probably a better solution for memSQL specifically.
insertIntoJdbc seems to have been deprecated as of 1.4.0, and using it actually calls write.jdbc().
write() returns a DataFrameWriter object. If you want to append data to your table you will have to change the save mode of the object to "append".
Another issue with the example in the question above is the DataFrame schema didn't match the schema of the target table.
The code below gives a working example from the Spark shell. I am using spark-shell --driver-class-path mysql-connector-java-5.1.36-bin.jar to start my spark-shell session.
import java.util.Properties
val prop = new Properties()
prop.put("user", "root")
prop.put("password", "")
val df = sc.parallelize(Array((1,234), (2,1233))).toDF.toDF("id", "val")
val dfWriter = df.write.mode("append")
dfWriter.jdbc("jdbc:mysql://172.17.01:3306/test", "table1", prop)
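For completeness, here is a rough PySpark version of the same append. This is my own sketch, not part of the original answer; it is written against a newer Spark where SparkSession exists (on 1.4 you would use SQLContext instead) and assumes the MySQL connector jar and the credentials from the example are in place:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("memsql-append-demo").getOrCreate()

# Schema matches the target table: (id INT, val INT)
df = spark.createDataFrame([(1, 234), (2, 1233)], ["id", "val"])

# mode("append") adds rows instead of trying to create the table again.
(df.write
   .mode("append")
   .jdbc("jdbc:mysql://172.17.01:3306/test",
         "table1",
         properties={"user": "root", "password": ""}))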
| SingleStore | 32,915,682 | 12 |
According to this SQL join cheat-sheet, a left outer join on one column is the following :
SELECT *
FROM a
LEFT JOIN b
ON a.foo = b.foo
WHERE b.foo IS NULL
I'm wondering what it would look like with a join on multiple columns, should it be an OR or an AND in the WHERE clause ?
SELECT *
FROM a
LEFT JOIN b
ON a.foo = b.foo
AND a.bar = b.bar
AND a.ter = b.ter
WHERE b.foo IS NULL
OR b.bar IS NULL
OR b.ter IS NULL
or
SELECT *
FROM a
LEFT JOIN b
ON a.foo = b.foo
AND a.bar = b.bar
AND a.ter = b.ter
WHERE b.foo IS NULL
AND b.bar IS NULL
AND b.ter IS NULL
?
(I don't think it does, but in case it matters, the db engine is Vertica's)
(I'm betting on the OR one)
| That depends on whether the columns are nullable, but assuming they are not, checking any of them will do:
SELECT *
FROM a
LEFT JOIN b
ON a.foo = b.foo
AND a.bar = b.bar
AND a.ter = b.ter
WHERE b.foo IS NULL -- this could also be bar or ter
This is because after a successful join, all three columns will have a non-null value.
If some of these columns were nullable and you'd like to check if any one of them had a value after the join, then your first (OR) approach would be OK.
| Vertica | 40,015,779 | 19 |
Is there any way I can store the last iterated row result and use that for next row iteration?
For example I have a table say(Time_Table).
__ Key type timeStamp
1 ) 1 B 2015-06-28 09:00:00
2 ) 1 B 2015-06-28 10:00:00
3 ) 1 C 2015-06-28 11:00:00
4 ) 1 A 2015-06-28 12:00:00
5 ) 1 B 2015-06-28 13:00:00
Now suppose I have an exceptionTime of 90 minutes which is constant.
If I start checking my Time_Table then:
for the first row, as there is no row before 09:00:00, it will directly put this record into my target table. Now my reference point is at 9:00:00.
For the second row at 10:00:00, the last reference point was 09:00:00 and TIMESTAMPDIFF(s,09:00:00,10:00:00) is 60 which is less than the required 90. I do not add this row to my target table.
For the third row, the last recorded exception was at 09:00:00 and the TIMESTAMPDIFF(s,09:00:00,11:00:00) is 120 which is greater than the required 90 so I choose this record and set reference point to 11:00:00.
For the fourth row, TIMESTAMPDIFF(s,11:00:00,12:00:00) is 60, which is again less than the required 90, so similarly it will not be saved.
For the fifth row at 13:00:00, the difference from the reference point 11:00:00 is 120, which is greater than 90, so this one is again saved.
Target table
__ Key type timeStamp
1 ) 1 B 2015-06-28 09:00:00
2 ) 1 C 2015-06-28 11:00:00
3 ) 1 B 2015-06-28 13:00:00
Is there any way that I can solve this problem purely in SQL?
My approach:
SELECT * FROM Time_Table A WHERE NOT EXISTS(
SELECT 1 FROM Time_Table B
WHERE A.timeStamp > B.timeStamp
AND abs(TIMESTAMPDIFF(s,B.timeStamp,A.timeStamp)) > 90
)
But this does not actually work.
| This is not possible using just pure SQL in Vertica. To do this in pure SQL you would need to be able to perform a recursive query, which is not supported in Vertica. In other database products you can do this using a WITH clause; for Vertica you are going to have to do it in the application logic. This is based on the statement "Each WITH clause within a query block must have a unique name. Attempting to use same-name aliases for WITH clause query names within the same query block causes an error. WITH clauses do not support INSERT, DELETE, and UPDATE statements, and you cannot use them recursively" from the Vertica 7.1.x documentation.
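Since the "reference point" logic has to live in the application, here is one way the 90-minute filtering from the question could look in client code. This is my own sketch, not part of the answer; it assumes the vertica-python driver, placeholder connection details, and the table/column names from the question (quote them if they clash with reserved words):
import vertica_python

EXCEPTION_WINDOW = 90 * 60  # 90 minutes, in seconds

conn_info = {"host": "localhost", "port": 5433,
             "user": "dbadmin", "password": "", "database": "mydb"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute('SELECT "Key", type, timeStamp FROM Time_Table ORDER BY timeStamp')

    kept = []
    reference = None  # timestamp of the last row we kept
    for key, row_type, ts in cur.fetchall():
        # Keep the first row, or any row more than 90 minutes past the reference point.
        if reference is None or (ts - reference).total_seconds() > EXCEPTION_WINDOW:
            kept.append((key, row_type, ts))
            reference = ts

    # 'kept' now holds the rows to insert into the target table.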
| Vertica | 37,020,449 | 15 |
I'm using the following:
DRIVER={Vertica ODBC Driver 4.1};
SERVER=lnxtabdb01.xxxx.com;
PORT=5433;
DATABASE=vertica;
USER=dbadmin;
PASSWORD=vertica;
OPTION=3;
I'm getting this error and I just wanted to make sure that my connection string is correct before I check other possible issues.
error:
EnvironmentError: System.Data.Odbc.OdbcException (0x80131937): ERROR [28000] FATAL: no Vertica user name specified in startup packet
UPDATE:
For now I'm just using a System Data Source Name in Windows Vista. But I'd still like to know if there's an ODBC connection string so that I don't have to set that up on every machine that will be connecting to the Vertica DB in this fashion.
Well, I tried a PostgreSQL-style connection string that looks like this:
Host=lnxtabdb01.xxxx.com;
Port=5433;
Database=vertica;
User ID=dbadmin;
Password=vertica;
Pooling=true;
OPTION=3;
Min Pool Size=0;
Max Pool Size=100;
Connection Lifetime=0;
Now I'm getting this:
EnvironmentError: System.Data.Odbc.OdbcException (0x80131937): ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
| The accepted answer describes a way to connect with the Vertica ODBC driver using a System DSN. It is possible to connect using just a connection string to directly configure the connection against the driver. The following connection string pattern has been tested against the Vertica ODBC Client Driver v6.1.2:
Driver=Vertica;Server=MyVerticaServer;Port=5433;Database=MyVerticaDB;UID=foo;PWD=bar
Port is optional:
Driver=Vertica;Server=MyVerticaServer;Database=MyVerticaDB;UID=foo;PWD=bar
Or, if you're doing this in .NET as I am, you can use this to format up the connection string from the necessary parameters:
var connectionString = string.Format(
"Driver=Vertica;Server={0};{1}Database={2};UID={3};PWD={4}",
server,
port == null ? string.Empty : string.Format("Port={0};", port),
database,
username,
password);
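The same DSN-less string works from other ODBC front ends as well; for example, here is a small pyodbc sketch (my addition, not from the original answer; the driver name, server, and credentials are placeholders):
import pyodbc

conn_str = (
    "Driver=Vertica;"
    "Server=MyVerticaServer;"
    "Port=5433;"
    "Database=MyVerticaDB;"
    "UID=foo;"
    "PWD=bar"
)

conn = pyodbc.connect(conn_str)
cur = conn.cursor()
cur.execute("SELECT version()")  # simple query to confirm the connection works
print(cur.fetchone())
conn.close()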
| Vertica | 5,807,510 | 10 |
Hi, I have configured the DSN settings for Vertica on an Ubuntu 10.10 32-bit machine.
The settings are all fine and I have cross-checked them.
Here is my odbc.ini file:
[VerticaDSN]
Description = VerticaDSN ODBC driver
Driver = /opt/vertica/lib/libverticaodbc_unixodbc.so
Servername = myservername
Database = mydbname
Port = 5433
UserName = myuname
Password = *******
Locale = en_US
Similarly I have a odbcinst.ini file.
When I run the command isql -v VerticaDSN, I get the following error:
[S1000][unixODBC][DSI] The error message NoSQLGetPrivateProfileString could not be found in the en-US locale. Check that /en-US/ODBCMessages.xml exists.
[ISQL]ERROR: Could not SQLConnect.
I have tried everything but I am not able to decipher this error.
Any help will be greatly appreciated.
| You may be missing the Driver configuration section. Edit or create the file /etc/vertica.ini with the following content:
[Driver]
DriverManagerEncoding=UTF-16
ODBCInstLib=/usr/lib64/libodbcinst.so
ErrorMessagesPath=/opt/vertica/lib64
LogLevel=4
LogPath=/tmp
More information can be found in the Vertica Programmer's Guide in the section "Location of the Additional Driver Settings".
| Vertica | 9,778,033 | 10 |
Anyone know of a handy function to search through column_names in Vertica? From the documentation, it seems like \d only queries table_names. I'm looking for something like MySQL's information_schema.columns, but can't find any information about a similar table of meta-data.
Thanks!
| In 5.1 if you have enough permissions you can do
SELECT * FROM v_catalog.columns;
to access column info. For some things you'll need to join with
v_catalog.tables
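To actually search the column names (what the question asks for), you can filter v_catalog.columns. Here is a small sketch using the vertica-python driver (my addition, not from the answer; the connection details and the 'customer' search term are placeholders):
import vertica_python

conn_info = {"host": "localhost", "port": 5433,
             "user": "dbadmin", "password": "", "database": "mydb"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # ILIKE gives a case-insensitive match on the column name.
    cur.execute("""
        SELECT table_schema, table_name, column_name, data_type
        FROM v_catalog.columns
        WHERE column_name ILIKE '%customer%'
        ORDER BY table_schema, table_name
    """)
    for row in cur.fetchall():
        print(row)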
| Vertica | 10,047,469 | 10 |
I read about VoltDB's command log. The command log records the transaction invocations instead of each row change as in a write-ahead log. By recording only the invocation, the command logs are kept to a bare minimum, limiting the impact the disk I/O will have on performance.
Can anyone explain the database theory behind why VoltDB uses a command log and why the standard SQL databases such as Postgres, MySQL, SQL Server, and Oracle use a write-ahead log?
| I think it is better to rephrase:
Why does new distributed VoltDB use a command log over write-ahead log?
Let's do an experiment and imagine you are going to write your own storage/database implementation. Undoubtedly you are advanced enough to abstract a file system and use block storage along with some additional optimizations.
Some basic terminology:
State : stored information at a given point of time
Command : directive to the storage to change its state
So your database may look like the following:
Next step is to execute some command:
Please note several important aspects:
A command may affect many stored entities, so many blocks will get dirty
Next state is a function of the current state and the command
Some intermediate states can be skipped, because it is enough to have a chain of commands instead.
Finally, you need to guarantee data integrity.
Write-Ahead Logging - the central concept is that state changes should be logged before any heavy update to permanent storage. Following our idea, we can log incremental changes for each block.
Command Logging - the central concept is to log only the command, which is then used to produce the next state.
There are pros and cons to both approaches. A write-ahead log contains all the changed data, while a command log requires additional processing on replay but is fast and lightweight to write.
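A toy illustration of that trade-off (this is my own sketch, not VoltDB code; the bank-balance "database" is invented purely to show what each kind of log records):
state = {"balance": 100}

def apply_command(st, command, amount):
    # Deterministic state transition: next state = f(current state, command)
    if command == "deposit":
        st["balance"] += amount
    elif command == "withdraw":
        st["balance"] -= amount

# Write-ahead logging: persist the consequences (the changed data) of each command.
wal = []
def wal_execute(command, amount):
    old = dict(state)
    apply_command(state, command, amount)
    wal.append({"old": old, "new": dict(state)})

# Command logging: persist only the invocation; recovery replays it deterministically.
command_log = []
def cmd_execute(command, amount):
    command_log.append((command, amount))
    apply_command(state, command, amount)

def recover_from_command_log(snapshot):
    st = dict(snapshot)
    for command, amount in command_log:
        apply_command(st, command, amount)
    return st

# The command log stays tiny but replay has to redo the work; the WAL is larger,
# but recovery only has to restore the logged values.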
VoltDB: Command Logging and Recovery
The key to command logging is that it logs the invocations, not the
consequences, of the transactions. By recording only the invocation,
the command logs are kept to a bare minimum, limiting the impact the disk I/O will
have on performance.
Additional notes
SQLite: Write-Ahead Logging
The traditional rollback journal works by writing a copy of the
original unchanged database content into a separate rollback journal
file and then writing changes directly into the database file.
A COMMIT occurs when a special record indicating a commit is appended
to the WAL. Thus a COMMIT can happen without ever writing to the
original database, which allows readers to continue operating from the
original unaltered database while changes are simultaneously being
committed into the WAL.
PostgreSQL: Write-Ahead Logging (WAL)
Using WAL results in a significantly reduced number of disk writes,
because only the log file needs to be flushed to disk to guarantee
that a transaction is committed, rather than every data file changed
by the transaction.
The log file is written sequentially, and so the
cost of syncing the log is much less than the cost of flushing the
data pages. This is especially true for servers handling many small
transactions touching different parts of the data store. Furthermore,
when the server is processing many small concurrent transactions, one
fsync of the log file may suffice to commit many transactions.
Conclusion
Command Logging:
is faster
has lower footprint
has heavier "Replay" procedure
requires frequent snapshot
Write Ahead Logging is a technique to provide atomicity. Better Command Logging performance should also improve transaction processing. Databases on 1 Foot
Confirmation
VoltDB Blog: Intro to VoltDB Command Logging
One advantage of command logging over ARIES style logging is that a
transaction can be logged before execution begins instead of executing
the transaction and waiting for the log data to flush to disk. Another
advantage is that the IO throughput necessary for a command log is
bounded by the network used to relay commands and, in the case of
Gig-E, this throughput can be satisfied by cheap commodity disks.
It is important to remember VoltDB is distributed by its nature. So transactions are a little bit tricky to handle and performance impact is noticeable.
VoltDB Blog: VoltDB’s New Command Logging Feature
The command log in VoltDB consists of stored procedure invocations and
their parameters. A log is created at each node, and each log is
replicated because all work is replicated to multiple nodes. This
results in a replicated command log that can be de-duped at replay
time. Because VoltDB transactions are strongly ordered, the command
log contains ordering information as well. Thus the replay can occur
in the exact order the original transactions ran in, with the full
transaction isolation VoltDB offers. Since the invocations themselves
are often smaller than the modified data, and can be logged before
they are committed, this approach has a very modest effect on
performance. This means VoltDB users can achieve the same kind of
stratospheric performance numbers, with additional durability
assurances.
| VoltDB | 14,181,180 | 59 |
I am quite new for zookeeper port through which I am coming across from past few days.
I introduced with zookeeper port keyword at two occasion:
while configuring neo4j db cluster (link) and
while running compiled voltdb catalog (link) (See Network Configuration Arguments)
Then, I came across Apache Zookeeper, (which I guess is related to distributed application, I am a newbie in distributed application as well). hence question came in my mind:
is there any implementation of apache zookeeper in above 2 scenarios ?
What exactly this zookeeper port do internally ?
Any help would be appreciated, Thanks.
| Zookeeper is used in distributed applications mainly for configuration management and high-availability operations. Zookeeper does this via a master-slave architecture. Neo4j and VoltDB might be using zookeeper for this purpose.
Coming to understanding the ports:
Suppose you have 3 zookeeper servers. You need to mention them in the configuration as:
clientPort=2181
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
Out of these, one server will be the master and the rest will be slaves. If any server goes down, then zookeeper elects a leader automatically.
Servers listen on three ports: 2181 for client connections; 2888 for
follower connections, if they are the leader; and 3888 for other
server connections during the leader election phase .
| VoltDB | 18,168,541 | 27 |
I've been working on a project for over half a year now, building healthcare software from the ground up. When I joined up, MySQL had been chosen as the primary data store.
A few months and many headaches later, we've begun to investigate alternative data stores that can offer the flexibility we need to record our critical and ever-changing healthcare data.
We've looked at many NoSQL solutions; MongoDB drawing the most of our attention. Being able to store structured, embedded data would be a huge benefit. We've been scared off by reports of data loss/reliability issues, however.
I've come across a few "NewSQL" data stores and I'm interested in VoltDB in particular.
I'm curious to know if anyone has any experience with Volt or has seen it implemented in a project.
Edit:
Data integrity and consistency are most important. It could be very harmful for a patients information to be lost, they may receive improper treatment etc.
Data volume will vary; we will likely support small practices first. Something like 700 users total. But even when we scale up to hospitals, we're not looking at social media like traffic.
Regarding your question, yes data structures will evolve. On top of having to change the existing structure to capture new or modified inputs, we have to preserve the structure of the existing data as a sort of snap-shot. We've only been able to do this EAV style with MySQL.
Thanks for your feedback.
| We went live last year with an application that uses VoltDB. We're storing around 1.5 billion records and processing 50-90 million transactions a day with a kfactor=1 4 server cluster ( 256 GB memory/server ). Given the performance of VoltDB, we could easily be handing 1 billion transactions a day.
To date, we have had no problems related to the VoltDB software. Our experience is that it is truly ACID compliant. With the addition of the Command Logging feature, I believe you can configure the logging parameters to preclude the loss of any transactions.
Other strong features include its scalability ( and the relative simplicity to add capacity ).
An important consideration when choosing VoltDB is understanding VoltDB's partitioning scheme. Achieving the extremely high transaction rates possible with VoltDB depends on the parallelism achieved through data partitioning. The partitioning is transparent to your application, but your application data must lend itself to being partitioned to get the maximum performance. If your data does not lend itself to partitioning, I believe the primary impact would be reduced throughput ( i.e. transaction rates ) - not a show-stopper.
Finally - a note concerning stored procedures. VoltDB allows you to replace stored procedures without stopping the database. Also, each invocation of a stored procedure constitutes a single transaction. We have leveraged stored procedures in such a way that we are able to modify/update our application logic without stopping the database.
| VoltDB | 9,285,335 | 13 |
We're using a Ruby web app with a Redis server for caching. Is there a point in testing Memcached instead?
What will give us better performance? Any pros or cons between Redis and Memcached?
Points to consider:
Read/write speed.
Memory usage.
Disk I/O dumping.
Scaling.
| Summary (TL;DR)
Updated June 3rd, 2017
Redis is more powerful, more popular, and better supported than memcached. Memcached can only do a small fraction of the things Redis can do. Redis is better even where their features overlap.
For anything new, use Redis.
Memcached vs Redis: Direct Comparison
Both tools are powerful, fast, in-memory data stores that are useful as a cache. Both can help speed up your application by caching database results, HTML fragments, or anything else that might be expensive to generate.
Points to Consider
When used for the same thing, here is how they compare using the original question's "Points to Consider":
Read/write speed: Both are extremely fast. Benchmarks vary by workload, versions, and many other factors but generally show redis to be as fast or almost as fast as memcached. I recommend redis, but not because memcached is slow. It's not.
Memory usage: Redis is better.
memcached: You specify the cache size and as you insert items the daemon quickly grows to a little more than this size. There is never really a way to reclaim any of that space, short of restarting memcached. All your keys could be expired, you could flush the database, and it would still use the full chunk of RAM you configured it with.
redis: Setting a max size is up to you. Redis will never use more than it has to and will give you back memory it is no longer using.
I stored 100,000 ~2KB strings (~200MB) of random sentences into both. Memcached RAM usage grew to ~225MB. Redis RAM usage grew to ~228MB. After flushing both, redis dropped to ~29MB and memcached stayed at ~225MB. They are similarly efficient in how they store data, but only one is capable of reclaiming it.
Disk I/O dumping: A clear win for redis since it does this by default and has very configurable persistence. Memcached has no mechanisms for dumping to disk without 3rd party tools.
Scaling: Both give you tons of headroom before you need more than a single instance as a cache. Redis includes tools to help you go beyond that while memcached does not.
memcached
Memcached is a simple volatile cache server. It allows you to store key/value pairs where the value is limited to being a string up to 1MB.
It's good at this, but that's all it does. You can access those values by their key at extremely high speed, often saturating available network or even memory bandwidth.
When you restart memcached your data is gone. This is fine for a cache. You shouldn't store anything important there.
If you need high performance or high availability there are 3rd party tools, products, and services available.
redis
Redis can do the same jobs as memcached can, and can do them better.
Redis can act as a cache as well. It can store key/value pairs too. In redis they can even be up to 512MB.
You can turn off persistence and it will happily lose your data on restart too. If you want your cache to survive restarts it lets you do that as well. In fact, that's the default.
It's super fast too, often limited by network or memory bandwidth.
If one instance of redis/memcached isn't enough performance for your workload, redis is the clear choice. Redis includes cluster support and comes with high availability tools (redis-sentinel) right "in the box". Over the past few years redis has also emerged as the clear leader in 3rd party tooling. Companies like Redis Labs, Amazon, and others offer many useful redis tools and services. The ecosystem around redis is much larger. The number of large scale deployments is now likely greater than for memcached.
The Redis Superset
Redis is more than a cache. It is an in-memory data structure server. Below you will find a quick overview of things Redis can do beyond being a simple key/value cache like memcached. Most of redis' features are things memcached cannot do.
Documentation
Redis is better documented than memcached. While this can be subjective, it seems to be more and more true all the time.
redis.io is a fantastic easily navigated resource. It lets you try redis in the browser and even gives you live interactive examples with each command in the docs.
There are now 2x as many stackoverflow results for redis as memcached. 2x as many Google results. More readily accessible examples in more languages. More active development. More active client development. These measurements might not mean much individually, but in combination they paint a clear picture that support and documentation for redis is greater and much more up-to-date.
Persistence
By default redis persists your data to disk using a mechanism called snapshotting. If you have enough RAM available it's able to write all of your data to disk with almost no performance degradation. It's almost free!
In snapshot mode there is a chance that a sudden crash could result in a small amount of lost data. If you absolutely need to make sure no data is ever lost, don't worry, redis has your back there too with AOF (Append Only File) mode. In this persistence mode data can be synced to disk as it is written. This can reduce maximum write throughput to however fast your disk can write, but should still be quite fast.
There are many configuration options to fine tune persistence if you need, but the defaults are very sensible. These options make it easy to setup redis as a safe, redundant place to store data. It is a real database.
Many Data Types
Memcached is limited to strings, but Redis is a data structure server that can serve up many different data types. It also provides the commands you need to make the most of those data types.
Strings (commands)
Simple text or binary values that can be up to 512MB in size. This is the only data type redis and memcached share, though memcached strings are limited to 1MB.
Redis gives you more tools for leveraging this datatype by offering commands for bitwise operations, bit-level manipulation, floating point increment/decrement support, range queries, and multi-key operations. Memcached doesn't support any of that.
Strings are useful for all sorts of use cases, which is why memcached is fairly useful with this data type alone.
Hashes (commands)
Hashes are sort of like a key value store within a key value store. They map between string fields and string values. Field->value maps using a hash are slightly more space efficient than key->value maps using regular strings.
Hashes are useful as a namespace, or when you want to logically group many keys. With a hash you can grab all the members efficiently, expire all the members together, delete all the members together, etc. Great for any use case where you have several key/value pairs that need to grouped.
One example use of a hash is for storing user profiles between applications. A redis hash stored with the user ID as the key will allow you to store as many bits of data about a user as needed while keeping them stored under a single key. The advantage of using a hash instead of serializing the profile into a string is that you can have different applications read/write different fields within the user profile without having to worry about one app overriding changes made by others (which can happen if you serialize stale data).
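A quick sketch of that profile idea with the redis-py client (my example, not part of the original answer; it assumes redis-py 3.5+ and made-up key/field names):
import redis

r = redis.Redis()

# Different applications can update different fields without clobbering each other.
r.hset("user:1001", mapping={"name": "Ada", "plan": "pro"})
r.hset("user:1001", "last_login", "2017-06-03T12:00:00Z")

profile = r.hgetall("user:1001")  # {b'name': b'Ada', b'plan': b'pro', b'last_login': ...}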
Lists (commands)
Redis lists are ordered collections of strings. They are optimized for inserting, reading, or removing values from the top or bottom (aka: left or right) of the list.
Redis provides many commands for leveraging lists, including commands to push/pop items, push/pop between lists, truncate lists, perform range queries, etc.
Lists make great durable, atomic, queues. These work great for job queues, logs, buffers, and many other use cases.
Sets (commands)
Sets are unordered collections of unique values. They are optimized to let you quickly check if a value is in the set, quickly add/remove values, and to measure overlap with other sets.
These are great for things like access control lists, unique visitor trackers, and many other things. Most programming languages have something similar (usually called a Set). This is like that, only distributed.
Redis provides several commands to manage sets. Obvious ones like adding, removing, and checking the set are present. So are less obvious commands like popping/reading a random item and commands for performing unions and intersections with other sets.
Sorted Sets (commands)
Sorted Sets are also collections of unique values. These ones, as the name implies, are ordered. They are ordered by a score, then lexicographically.
This data type is optimized for quick lookups by score. Getting the highest, lowest, or any range of values in between is extremely fast.
If you add users to a sorted set along with their high score, you have yourself a perfect leader-board. As new high scores come in, just add them to the set again with their high score and it will re-order your leader-board. Also great for keeping track of the last time users visited and who is active in your application.
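A minimal leader-board sketch with redis-py (my example; the key and player names are made up):
import redis

r = redis.Redis()

# Score goes up? Just ZADD again and the ordering takes care of itself.
r.zadd("leaderboard", {"alice": 4200, "bob": 3100, "carol": 5000})

# Top 3 players, highest score first.
top = r.zrevrange("leaderboard", 0, 2, withscores=True)
# [(b'carol', 5000.0), (b'alice', 4200.0), (b'bob', 3100.0)]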
Storing values with the same score causes them to be ordered lexicographically (think alphabetically). This can be useful for things like auto-complete features.
Many of the sorted set commands are similar to commands for sets, sometimes with an additional score parameter. Also included are commands for managing scores and querying by score.
Geo
Redis has several commands for storing, retrieving, and measuring geographic data. This includes radius queries and measuring distances between points.
Technically geographic data in redis is stored within sorted sets, so this isn't a truly separate data type. It is more of an extension on top of sorted sets.
Bitmap and HyperLogLog
Like geo, these aren't completely separate data types. These are commands that allow you to treat string data as if it's either a bitmap or a hyperloglog.
Bitmaps are what the bit-level operators I referenced under Strings are for. This data type was the basic building block for reddit's recent collaborative art project: r/Place.
HyperLogLog allows you to use a constant extremely small amount of space to count almost unlimited unique values with shocking accuracy. Using only ~16KB you could efficiently count the number of unique visitors to your site, even if that number is in the millions.
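For example, an approximate unique-visitor counter with redis-py (my example; key names are made up):
import redis

r = redis.Redis()

# PFADD stores only a probabilistic summary, never the visitor ids themselves.
for visitor_id in ("u1", "u2", "u3", "u2", "u1"):
    r.pfadd("visitors:2017-06-03", visitor_id)

print(r.pfcount("visitors:2017-06-03"))  # ~3, with a small error margin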
Transactions and Atomicity
Commands in redis are atomic, meaning you can be sure that as soon as you write a value to redis that value is visible to all clients connected to redis. There is no wait for that value to propagate. Technically memcached is atomic as well, but with redis adding all this functionality beyond memcached it is worth noting and somewhat impressive that all these additional data types and features are also atomic.
While not quite the same as transactions in relational databases, redis also has transactions that use "optimistic locking" (WATCH/MULTI/EXEC).
Pipelining
Redis provides a feature called 'pipelining'. If you have many redis commands you want to execute you can use pipelining to send them to redis all-at-once instead of one-at-a-time.
Normally when you execute a command to either redis or memcached, each command is a separate request/response cycle. With pipelining, redis can buffer several commands and execute them all at once, responding with all of the responses to all of your commands in a single reply.
This can allow you to achieve even greater throughput on bulk importing or other actions that involve lots of commands.
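With redis-py that looks roughly like this (my example, not from the original answer):
import redis

r = redis.Redis()

# transaction=False gives a plain pipeline (no MULTI/EXEC wrapper).
pipe = r.pipeline(transaction=False)
for i in range(1000):
    pipe.set(f"key:{i}", i)

results = pipe.execute()  # one round trip; all 1000 replies come back together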
Pub/Sub
Redis has commands dedicated to pub/sub functionality, allowing redis to act as a high speed message broadcaster. This allows a single client to publish messages to many other clients connected to a channel.
Redis does pub/sub as well as almost any tool. Dedicated message brokers like RabbitMQ may have advantages in certain areas, but the fact that the same server can also give you persistent durable queues and other data structures your pub/sub workloads likely need, Redis will often prove to be the best and most simple tool for the job.
Lua Scripting
You can kind of think of lua scripts like redis's own SQL or stored procedures. It's both more and less than that, but the analogy mostly works.
Maybe you have complex calculations you want redis to perform. Maybe you can't afford to have your transactions roll back and need guarantees every step of a complex process will happen atomically. These problems and many more can be solved with lua scripting.
The entire script is executed atomically, so if you can fit your logic into a lua script you can often avoid messing with optimistic locking transactions.
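As a small sketch (the key and token are placeholders), a classic "delete the key only if it still holds my token" script, executed atomically:
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 lock:resource my-token
The script runs as a single atomic unit, so no other client can change lock:resource between the GET and the DEL.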
Scaling
As mentioned above, redis includes built in support for clustering and is bundled with its own high availability tool called redis-sentinel.
Conclusion
Without hesitation I would recommend redis over memcached for any new projects, or existing projects that don't already use memcached.
The above may sound like I don't like memcached. On the contrary: it is a powerful, simple, stable, mature, and hardened tool. There are even some use cases where it's a little faster than redis. I love memcached. I just don't think it makes much sense for future development.
Redis does everything memcached does, often better. Any performance advantage for memcached is minor and workload specific. There are also workloads for which redis will be faster, and many more workloads that redis can do which memcached simply can't. The tiny performance differences seem minor in the face of the giant gulf in functionality and the fact that both tools are so fast and efficient they may very well be the last piece of your infrastructure you'll ever have to worry about scaling.
There is only one scenario where memcached makes more sense: where memcached is already in use as a cache. If you are already caching with memcached then keep using it, if it meets your needs. It is likely not worth the effort to move to redis and if you are going to use redis just for caching it may not offer enough benefit to be worth your time. If memcached isn't meeting your needs, then you should probably move to redis. This is true whether you need to scale beyond memcached or you need additional functionality.
| Redis | 10,558,465 | 1,482 |
I want to delete all keys. I want everything wiped out and give me a blank database.
Is there a way to do this from the Redis client?
| With redis-cli:
FLUSHDB – Deletes all keys from the connection's current database.
FLUSHALL – Deletes all keys from all databases on current host.
For example, in your shell:
redis-cli flushall
| Redis | 6,851,909 | 931 |
In my Redis DB I have a number of prefix:<numeric_id> hashes.
Sometimes I want to purge them all automatically. How do I do this without using some distributed locking mechanism?
| Execute in bash:
redis-cli KEYS "prefix:*" | xargs redis-cli DEL
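Note that KEYS blocks the server while it scans the whole keyspace, so on a large production instance a SCAN-based variant is usually safer (the pattern below is just the one from the question):
redis-cli --scan --pattern "prefix:*" | xargs -L 100 redis-cli DEL
On Redis 4.0+ you can use UNLINK instead of DEL to free the memory asynchronously.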
UPDATE
OK, I understand. What about this approach: store an additional incremental prefix and add it to all your keys. For example:
You have values like this:
prefix_prefix_actuall = 2
prefix:2:1 = 4
prefix:2:2 = 10
When you need to purge data, you change prefix_prefix_actuall first (for example, set prefix_prefix_actuall = 3), so your application will write new data to the keys prefix:3:1 and prefix:3:2. Then you can safely take the old values from prefix:2:1 and prefix:2:2 and purge the old keys.
| Redis | 4,006,324 | 791 |
Is there a Redis command for fetching all keys in the database? I have seen some python-redis libraries fetching them, but I was wondering if it is possible from the redis client.
| Try to look at KEYS command. KEYS * will list all keys stored in redis.
EDIT: please note the warning at the top of KEYS documentation page:
Time complexity: O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.
UPDATE (V2.8 or greater): SCAN is a superior alternative to KEYS, in the sense that it does not block the server nor does it consume significant resources. Prefer using it.
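For example, a single SCAN iteration looks like this (the pattern is arbitrary); you repeat the call with the returned cursor until it comes back as 0:
SCAN 0 MATCH user:* COUNT 100
Recent versions of redis-cli can also drive the iteration for you with redis-cli --scan --pattern 'user:*'.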
| Redis | 5,252,099 | 734 |
During writes to Redis ( SET foo bar ) I am getting the following error:
MISCONF Redis is configured to save RDB snapshots, but is currently
not able to persist on disk. Commands that may modify the data set are
disabled. Please check Redis logs for details about the error.
Basically I understand that the problem is that redis is not able to save data on the disk, but I have no idea how to get rid of the problem.
Also the following question has the same problem; it was abandoned a long time ago with no answers and most probably no attempts to solve the problem.
| Restart your redis server.
macOS (brew): brew services restart redis.
Linux: sudo service redis restart / sudo systemctl restart redis
Windows: Windows + R -> Type services.msc, Enter -> Search for Redis then click on restart.
I had this issue after upgrading redis with Brew (brew upgrade).
Once I restarted my laptop, it immediately worked.
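If you just need writes to work again immediately while you investigate the actual cause (disk space, permissions, memory available for the background save), you can tell Redis to stop refusing writes when a background save fails; note this only hides the symptom:
redis-cli config set stop-writes-on-bgsave-error no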
| Redis | 19,581,059 | 677 |
I apparently have a redis-server instance running because when I try to start a new server by entering redis-server, I'm greeted with the following:
Opening port: bind: Address already in use
I can't figure out how to stop this server and start a new one.
Is there any command I can append to redis-server when I'm typing in the CLI?
My OS is Ubuntu 10.04.
| Either connect to the running instance with redis-cli and use the SHUTDOWN command, or if you are on Ubuntu you can try to restart the redis server through init.d:
/etc/init.d/redis-server restart
or stop/start it:
/etc/init.d/redis-server stop
/etc/init.d/redis-server start
On Mac
redis-cli shutdown
| Redis | 6,910,378 | 544 |
I am a learner in Node.js.
What's Express.js?
What's the purpose of it with Node.js?
Why do we actually need Express.js? How is it useful for us to use with Node.js?
What's Redis? Does it come with Express.js?
|
1) What is Express.js?
Express.js is a Node.js framework. It's the most popular framework as of now (the most starred on NPM).
It's built around configuration and granular simplicity of Connect middleware. Some people compare Express.js to Ruby Sinatra vs. the bulky and opinionated Ruby on Rails.
2) What is the purpose of it with Node.js?
That you don't have to repeat the same code over and over again. Node.js is a low-level I/O mechanism which has an HTTP module. If you just use an HTTP module, a lot of work like parsing the payload, cookies, storing sessions (in memory or in Redis), and selecting the right route pattern based on regular expressions will have to be re-implemented. With Express.js, it is just there for you to use.
3) Why do we actually need Express.js? How it is useful for us to use with Node.js?
The first answer should answer your question. If not, then try to write a small REST API server in plain Node.js (that is, using only core modules) and then in Express.js. The latter will take you 5-10x less time and lines of code.
4) What is Redis? Does it come with Express.js?
Redis is a fast persistent key-value storage. You can optionally use it for storing sessions with Express.js, but you don't need to. By default, Express.js has memory storage for sessions. Redis can also be used for queueing jobs, for example, email jobs.
Check out my tutorial on REST API server with Express.js.
MVC but not by itself
Express.js is not a model-view-controller framework by itself. You need to bring your own object-relational mapping libraries such as Mongoose for MongoDB, Sequelize (http://sequelizejs.com) for SQL databases, or Waterline (https://github.com/balderdashy/waterline) for many databases into the stack.
Alternatives
Other Node.js frameworks to consider (https://www.quora.com/Node-js/Which-Node-js-framework-is-best-for-building-a-RESTful-API):
UPDATE: I put together this resource that aids people in choosing Node.js frameworks: http://nodeframework.com
UPDATE2: We added some GitHub stats to nodeframework.com so now you can compare the level of social proof (GitHub stars) for 30+ frameworks on one page.
Full-stack:
http://sailsjs.org
http://derbyjs.com/
Just REST API:
http://mcavage.github.io/node-restify/
Ruby on Rails like:
http://railwayjs.com/
http://geddyjs.org/
Sinatra like:
http://expressjs.com/
Other:
http://flatironjs.org/
https://github.com/isaacs/npm-www
http://frisbyjs.com/
Middleware:
http://www.senchalabs.org/connect/
Static site generators:
http://docpad.org
https://github.com/jnordberg/wintersmith
http://blacksmith.jit.su/
https://github.com/felixge/node-romulus
https://github.com/caolan/petrify
| Redis | 12,616,153 | 504 |
What I want is not a comparison between Redis and MongoDB. I know they are different; the performance and the API is totally different.
Redis is very fast, but the API is very 'atomic'. MongoDB will eat more resources, but the API is very very easy to use, and I am very happy with it.
They're both awesome, and I want to use Redis in deployment as much as I can, but it is hard to code. I want to use MongoDB in development as much as I can, but it needs an expensive machine.
So what do you think about the use of both of them? When to pick Redis? When to pick MongoDB?
| I would say, it depends on kind of dev team you are and your application needs.
For example, if you require a lot of querying, that mostly means it would be more work for your developers to use Redis, where your data might be stored in a variety of specialized data structures, customized for each type of object for efficiency. In MongoDB the same queries might be easier because the structure is more consistent across your data. On the other hand, in Redis, the sheer speed of the response to those queries is the payoff for the extra work of dealing with the variety of structures your data might be stored with.
MongoDB offers simplicity, much shorter learning curve for developers with traditional DB and SQL experience. However, Redis's non-traditional approach requires more effort to learn, but greater flexibility.
Eg. A cache layer can probably be better implemented in Redis. For more schema-able data, MongoDB is better. [Note: both MongoDB and Redis are technically schemaless]
If you ask me, my personal choice is Redis for most requirements.
Lastly, I hope by now you have seen http://antirez.com/post/MongoDB-and-Redis.html
| Redis | 5,400,163 | 466 |
I want to store a JSON payload into redis. There's really 2 ways I can do this:
One is using simple string keys and values.
key:user, value:payload (the entire JSON blob which can be 100-200 KB)
SET user:1 payload
Using hashes
HSET user:1 username "someone"
HSET user:1 location "NY"
HSET user:1 bio "STRING WITH OVER 100 lines"
Keep in mind that if I use a hash, the value length isn't predictable. They're not all short such as the bio example above.
Which is more memory efficient? Using string keys and values, or using a hash?
| This article can provide a lot of insight here: http://redis.io/topics/memory-optimization
There are many ways to store an array of Objects in Redis (spoiler: I like option 1 for most use cases):
Store the entire object as a JSON-encoded string in a single key and keep track of all Objects using a set (or list, if more appropriate). For example:
INCR id:users
SET user:{id} '{"name":"Fred","age":25}'
SADD users {id}
Generally speaking, this is probably the best method in most cases. If there are a lot of fields in the Object, your Objects are not nested with other Objects, and you tend to only access a small subset of fields at a time, it might be better to go with option 2.
Advantages: considered a "good practice." Each Object is a full-blown Redis key. JSON parsing is fast, especially when you need to access many fields for this Object at once. Disadvantages: slower when you only need to access a single field.
Store each Object's properties in a Redis hash.
INCR id:users
HMSET user:{id} name "Fred" age 25
SADD users {id}
Advantages: considered a "good practice." Each Object is a full-blown Redis key. No need to parse JSON strings. Disadvantages: possibly slower when you need to access all/most of the fields in an Object. Also, nested Objects (Objects within Objects) cannot be easily stored.
Store each Object as a JSON string in a Redis hash.
INCR id:users
HMSET users {id} '{"name":"Fred","age":25}'
This allows you to consolidate a bit and only use two keys instead of lots of keys. The obvious disadvantage is that you can't set the TTL (and other stuff) on each user Object, since it is merely a field in the Redis hash and not a full-blown Redis key.
Advantages: JSON parsing is fast, especially when you need to access many fields for this Object at once. Less "polluting" of the main key namespace. Disadvantages: About same memory usage as #1 when you have a lot of Objects. Slower than #2 when you only need to access a single field. Probably not considered a "good practice."
Store each property of each Object in a dedicated key.
INCR id:users
SET user:{id}:name "Fred"
SET user:{id}:age 25
SADD users {id}
According to the article above, this option is almost never preferred (unless the property of the Object needs to have specific TTL or something).
Advantages: Object properties are full-blown Redis keys, which might not be overkill for your app. Disadvantages: slow, uses more memory, and not considered "best practice." Lots of polluting of the main key namespace.
Overall Summary
Option 4 is generally not preferred. Options 1 and 2 are very similar, and they are both pretty common. I prefer option 1 (generally speaking) because it allows you to store more complicated Objects (with multiple layers of nesting, etc.) Option 3 is used when you really care about not polluting the main key namespace (i.e. you don't want there to be a lot of keys in your database and you don't care about things like TTL, key sharding, or whatever).
If I got something wrong here, please consider leaving a comment and allowing me to revise the answer before downvoting. Thanks! :)
| Redis | 16,375,188 | 359 |
I'm trying to answer two questions in a definitive list:
What are the underlying data structures used for Redis?
And what are the main advantages/disadvantages/use cases for each type?
So, I've read the Redis lists are actually implemented with linked lists. But for other types, I'm not able to dig up any information. Also, if someone were to stumble upon this question and not have a high level summary of the pros and cons of modifying or accessing different data structures, they'd have a complete list of when to best use specific types to reference as well.
Specifically, I'm looking to outline all types: string, list, set, zset and hash.
Oh, I've looked at these article, among others, so far:
http://redis.io/topics/data-types
http://redis.io/topics/data-types-intro
http://redis.io/topics/faq
| I'll try to answer your question, but I'll start with something that may look strange at first: if you are not interested in Redis internals you should not care about how data types are implemented internally. This is for a simple reason: for every Redis operation you'll find the time complexity in the documentation and, if you have the set of operations and the time complexity, the only other thing you need is some clue about memory usage (and because we do many optimizations that may vary depending on data, the best way to get these latter figures are doing a few trivial real world tests).
But since you asked, here is the underlying implementation of every Redis data type.
Strings are implemented using a C dynamic string library so that we don't pay (asymptotically speaking) for allocations in append operations. This way we have O(N) appends, for instance, instead of having quadratic behavior.
Lists are implemented with linked lists.
Sets and Hashes are implemented with hash tables.
Sorted sets are implemented with skip lists (a peculiar type of balanced trees).
But when lists, sets, and sorted sets are small in number of items and size of the largest values, a different, much more compact encoding is used. This encoding differs for different types, but has the feature that it is a compact blob of data that often forces an O(N) scan for every operation. Since we use this format only for small objects this is not an issue; scanning a small O(N) blob is cache oblivious so practically speaking it is very fast, and when there are too many elements the encoding is automatically switched to the native encoding (linked list, hash, and so forth).
But your question was not really just about internals; your point was: what type to use to accomplish what?
Strings
This is the base type of all the types. It's one of the four types but is also the base type of the complex types, because a List is a list of strings, a Set is a set of strings, and so forth.
A Redis string is a good idea in all the obvious scenarios where you want to store an HTML page, but also when you want to avoid converting your already encoded data. So for instance, if you have JSON or MessagePack you may just store objects as strings. In Redis 2.6 you can even manipulate this kind of object server side using Lua scripts.
Another interesting usage of strings is bitmaps, and in general random access arrays of bytes, since Redis exports commands to access random ranges of bytes, or even single bits. For instance check this good blog post: Fast Easy real time metrics using Redis.
Lists
Lists are good when you are likely to touch only the extremes of the list: near tail, or near head. Lists are not very good to paginate stuff, because random access is slow, O(N).
So good uses of lists are plain queues and stacks, or processing items in a loop using RPOPLPUSH with same source and destination to "rotate" a ring of items.
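A bare-bones queue sketch (key names invented): producers LPUSH jobs onto the head and a worker pops them from the tail, optionally blocking until one arrives.
LPUSH jobs "job:1"
LPUSH jobs "job:2"
RPOP jobs
BRPOP jobs 5
Here RPOP returns "job:1" (the oldest item), and BRPOP blocks for up to 5 seconds waiting for a new job.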
Lists are also good when we want just to create a capped collection of N items where usually we access just the top or bottom items, or when N is small.
Sets
Sets are an unordered data collection, so they are good every time you have a collection of items and it is very important to check for existence or size of the collection in a very fast way. Another cool thing about sets is support for peeking or popping random elements (SRANDMEMBER and SPOP commands).
Sets are also good to represent relations, e.g., "What are friends of user X?" and so forth. But other good data structures for this kind of stuff are sorted sets as we'll see.
Sets support complex operations like intersections, unions, and so forth, so this is a good data structure for using Redis in a "computational" manner, when you have data and you want to perform transformations on that data to obtain some output.
Small sets are encoded in a very efficient way.
Hashes
Hashes are the perfect data structure to represent objects, composed of fields and values. Fields of hashes can also be atomically incremented using HINCRBY. When you have objects such as users, blog posts, or some other kind of item, hashes are likely the way to go if you don't want to use your own encoding like JSON or similar.
However, keep in mind that small hashes are encoded very efficiently by Redis, and you can ask Redis to atomically GET, SET or increment individual fields in a very fast fashion.
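A tiny sketch of an object stored as a hash (the key and field names are invented):
HMSET user:1000 name "John" email "john@example.com" visits 0
HINCRBY user:1000 visits 1
HGET user:1000 name
HGETALL user:1000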
Hashes can also be used to represent linked data structures, using references. For instance check the lamernews.com implementation of comments.
Sorted Sets
Sorted sets are the only other data structures, besides lists, to maintain ordered elements. You can do a number of cool stuff with sorted sets. For instance, you can have all kinds of Top Something lists in your web application. Top users by score, top posts by pageviews, top whatever, but a single Redis instance will support tons of insertion and get-top-elements operations per second.
Sorted sets, like regular sets, can be used to describe relations, but they also allow you to paginate the list of items and to remember the ordering. For instance, if I remember friends of user X with a sorted set I can easily remember them in order of accepted friendship.
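For example, remembering friendships keyed by acceptance time (the ids and timestamps are made up) lets you page through them in order:
ZADD friends:1000 1325419200 user:2000
ZADD friends:1000 1325505600 user:3000
ZRANGE friends:1000 0 9 WITHSCORES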
Sorted sets are good for priority queues.
Sorted sets are like more powerful lists where inserting, removing, or getting ranges from the middle of the list is always fast. But they use more memory, and are O(log(N)) data structures.
Conclusion
I hope that I provided some info in this post, but it is far better to download the source code of lamernews from http://github.com/antirez/lamernews and understand how it works. Many data structures from Redis are used inside Lamer News, and there are many clues about what to use to solve a given task.
Sorry for grammar typos, it's midnight here and too tired to review the post ;)
| Redis | 9,625,246 | 349 |
Hi, I am using Laravel with Redis. When I try to access a key with the get method, I get the following error: "WRONGTYPE Operation against a key holding the wrong kind of value"
I am using the following code to get the data from Redis:
$values = "l_messages";
$value = $redis->HGETALL($values);
print($value);
| Redis supports 6 data types. You need to know what type of value that a key maps to, as for each data type, the command to retrieve it is different.
Here are the commands to retrieve key value(s):
if value is of type string -> GET <key>
if value is of type hash -> HGET or HMGET or HGETALL <key>
if value is of type lists -> lrange <key> <start> <end>
if value is of type sets -> smembers <key>
if value is of type sorted sets -> ZRANGEBYSCORE <key> <min> <max>
if value is of type stream -> xread count <count> streams <key> <ID>. https://redis.io/commands/xread
Use the TYPE command to check the type of value a key is mapping to:
type <key>
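For example, if TYPE reports that l_messages is actually a list (which the error suggests, though only TYPE can confirm it), you would read it with LRANGE rather than HGETALL:
redis> TYPE l_messages
list
redis> LRANGE l_messages 0 -1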
| Redis | 37,953,019 | 346 |
I have a Linux server with Redis installed and I want to connect to it via command line from my local Linux machine.
Is it possible to install redis-cli only (without redis-server and other tools)?
If I just copy redis-cli file to my local machine and run it, I have the following error:
./redis-cli: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by ./redis-cli)
| Ubuntu (tested on 14.04) has a package called redis-tools which contains redis-cli among other tools.
To install it type:
sudo apt-get install redis-tools
Note that on Ubuntu 16.04+ the command is a little bit different:
sudo apt install redis-tools
| Redis | 21,795,340 | 313 |
I'm trying to follow the Redis installation process that was discuss in this article of digital ocean, for in WSL(Windows Sub-System for Linux). The Ubuntu version installed is Ubuntu 18.04.
Everything in redis installation is fine but when I tried to run this sudo systemctl start redis I got this message.
System has not been booted with systemd as init system (PID 1). Can't operate.
Any Idea on what should I do with that?
| Instead, use: sudo service redis-server start
I had the same problem, stopping/starting other services from within Ubuntu on WSL. This worked, where systemctl did not.
And one could reasonably wonder, "how would you know that the service name was 'redis-server'?" You can see them using service --status-all
| Redis | 52,197,246 | 310 |
What is the standard naming convention for keys in redis? I've seen values separated by :, but I'm not sure what the standard convention is.
For a user, would you do something like: user:00
If the user's id was 00
Are you able to query for just the beginning of the key to return all users?
I'm mainly just hoping to avoid any future problems by researching how this works for people and why they chose their conventions.
|
What are the normal naming convention for keys in redis? I've seen
values separated by : but I'm not sure what the normal convention is,
or why.
Yes, the colon sign : is a convention when naming keys. In this tutorial on the redis website it is stated: Try to stick with a schema. For instance "object-type:id:field" can be
a nice idea, like in "user:1000:password". I like to use dots for
multi-words fields, like in "comment:1234:reply.to".
Are you able to query for just the beginning of the key to return all
users?
If you mean something like directly querying for all keys which start with user:, there is a KEYS command for that. This command should, however, be used only for debugging purposes since it is O(N): it searches through all the keys stored in the database.
A more appropriate solution for this problem is to create a dedicated key, let's name it users, which will store all the user keys, for example, in a list or set data structure.
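A minimal sketch of such an index (key names are illustrative): add every user key to the set when it is created, remove it when the user is deleted, and read the whole index with SMEMBERS.
SADD users user:1
SADD users user:2
SMEMBERS users
SREM users user:1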
| Redis | 6,965,451 | 290 |
I can't see any difference between Redis and caching technologies like Velocity or the Enterprise Library Caching Framework. You're effectively just adding objects to an in-memory data store using a unique key. There do not seem to be any relational semantics...
What am I missing?
| No, Redis is much more than a cache.
Like a cache, Redis stores key-value pairs. But unlike a cache, Redis lets you operate on the values. There are 5 data types in Redis - Strings, Sets, Hashes, Lists and Sorted Sets. Each data type exposes various operations.
The best way to understand Redis is to model an application without thinking about how you are going to store it in a database.
Let's say we want to build StackOverflow.com. To keep it simple, we need Questions, Answers, Tags and Users.
Modeling Questions, Users and Answers
Each object can be modeled as a Map. For example, a Question is a map with fields {id, title, date_asked, votes, asked_by, status}. Similarly, an Answer is a map with fields {id, question_id, answer_text, answered_by, votes, status}. Similarly, we can model a user object.
Each of these objects can be directly stored in Redis as a Hash. To generate unique ids, you can use the atomic increment command. Something like this:
$ HINCRBY unique_ids question 1
(integer) 1
$ HMSET question:1 title "Is Redis just a cache?" asked_by 12 votes 0
OK
$ HINCRBY unique_ids answer 1
(integer) 1
$ HMSET answer:1 question_id 1 answer_text "No, its a lot more" answered_by 15 votes 1
OK
Handling Up Votes
Now, every time someone upvotes a question or an answer, you just need to do this:
$ HINCRBY question:1 votes 1
(integer) 1
$ HINCRBY question:1 votes 1
(integer) 2
List of Questions for Homepage
Next, we want to store the most recent questions to display on the home page. If you were writing a .NET or a Java program, you would store the questions in a List. Turns out, that is the best way to store this in Redis as well.
Every time someone asks a question, we add its id to the list:
$ lpush questions question:1
(integer) 1
$ lpush questions question:2
(integer) 1
Now, when you want to render your homepage, you ask Redis for the most recent 25 questions:
$ lrange questions 0 24
1) "question:100"
2) "question:99"
3) "question:98"
4) "question:97"
5) "question:96"
...
25) "question:76"
Now that you have the ids, retrieve items from Redis using pipelining and show them to the user.
Questions by Tags, Sorted by Votes
Next, we want to retrieve questions for each tag. But SO allows you to see top voted questions, new questions or unanswered questions under each tag.
To model this, we use Redis' Sorted Set feature. A Sorted Set allows you to associate a score with each element. You can then retrieve elements based on their scores.
Lets go ahead and do this for the Redis tag:
$ zadd questions_by_votes_tagged:redis 2 question:1
(integer) 1
$ zadd questions_by_votes_tagged:redis 10 question:2
(integer) 1
$ zadd questions_by_votes_tagged:redis 5 question:613
(integer) 1
$ zrange questions_by_votes_tagged:redis 0 5
1) "question:1"
2) "question:613"
3) "question:2"
$ zrevrange questions_by_votes_tagged:redis 0 5
1) "question:2"
2) "question:613"
3) "question:1"
What did we do over here? We added questions to a sorted set, and associated a score (number of votes) to each question. Each time a question gets upvoted, we will increment its score. And when a user clicks "Questions tagged Redis, sorted by votes", we just do a zrevrange and get back the top questions.
Realtime Questions without refreshing page
And finally, a bonus feature. If you keep the questions page open, SO will notify you when a new question is added. How can Redis help over here?
Redis has a pub-sub model. You can create channels, for example "channel_questions_tagged_redis". You then subscribe users to a particular channel. When a new question is added, you would publish a message to that channel. All users would then get the message. You will have to use a web technology like web sockets or comet to actually deliver the message to the browser, but Redis helps you with all the plumbing on the server side.
Persistence, Reliability etc.
Unlike a cache, Redis persists data on the hard disk. You can have a master-slave setup to provide better reliability. To learn more, go through Persistence and Replication topics over here.
| Redis | 10,137,857 | 287 |
I used homebrew to install Redis, but when I try to ping Redis it shows this error:
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Note :
I tried to turn off firewall and edit conf file but still cannot ping.
I am using macOS Sierra and homebrew version 1.1.11
| After installing redis, type from terminal:
redis-server
and the Redis server will be started.
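If you would rather have homebrew run it as a background service instead of keeping a terminal open, this should also work:
brew services start redis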
| Redis | 42,857,551 | 284 |
How do I check the Redis server version?
I found this command on the Redis site:
$ redis-server
and that should give me (according to the site):
[28550] 01 Aug 19:29:28 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[28550] 01 Aug 19:29:28 * Server started, Redis version 2.2.12
[28550] 01 Aug 19:29:28 * The server is now ready to accept connections on port 6379
... and so forth ...
but I get this instead:
[8719] 04 Feb 14:51:09.009 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[8719] 04 Feb 14:51:09.009 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984.
[8719] 04 Feb 14:51:09.009 # Creating Server TCP listening socket *:6379: bind: Address already in use
Which means I need to configure it, but all I want is the version!
So how do I check Redis server version?
|
$ redis-server --version
gives you the version.
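If a server is already running, you can also ask it directly, for example:
redis-cli info server | grep redis_version
(redis-cli --version, by contrast, only reports the version of the command-line client itself.)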
| Redis | 21,555,942 | 283 |
I ran this command to access my redis server.
telnet 127.0.0.1 6379
What is the command to show all of my databases?
| There is no command to do it (like you would do it with MySQL for instance). The number of Redis databases is fixed, and set in the configuration file. By default, you have 16 databases. Each database is identified by a number (not a name).
You can use the following command to know the number of databases:
CONFIG GET databases
1) "databases"
2) "16"
You can use the following command to list the databases for which some keys are defined:
INFO keyspace
# Keyspace
db0:keys=10,expires=0
db1:keys=1,expires=0
db3:keys=1,expires=0
Please note that you are supposed to use the "redis-cli" client to run these commands, not telnet. If you want to use telnet, then you need to run these commands formatted using the Redis protocol.
For instance:
*2
$4
INFO
$8
keyspace
$79
# Keyspace
db0:keys=10,expires=0
db1:keys=1,expires=0
db3:keys=1,expires=0
You can find the description of the Redis protocol here:
http://redis.io/topics/protocol
| Redis | 12,802,726 | 280 |
I have the URL and PORT of a remote Redis server. I am able to write into Redis from Scala. However, I want to connect to the remote Redis via terminal using redis-server or something similar in order to make several calls of hget, get, etc. (I can do it with my locally installed Redis without any problem).
| redis-cli -h XXX.XXX.XXX.XXX -p YYYY
xxx.xxx.xxx.xxx is the IP address and yyyy is the port
EXAMPLE from my dev environment
redis-cli -h 10.144.62.3 -p 30000
REDIS CLI COMMANDS
Host, port, password and database By default redis-cli connects to the
server at 127.0.0.1 port 6379. As you can guess, you can easily change
this using command line options. To specify a different host name or
an IP address, use -h. In order to set a different port, use -p.
redis-cli -h redis15.localnet.org -p 6390 ping
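If the remote server requires a password or you want a specific database, redis-cli also accepts -a for the password and -n for the database number, for example (the host, password and database number below are placeholders):
redis-cli -h 10.144.62.3 -p 30000 -a yourpassword -n 2 ping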
| Redis | 40,678,865 | 277 |
So, I've come to a place where I wanted to segment the data I store in redis into separate databases as I sometimes need to make use of the keys command on one specific kind of data, and wanted to separate it to make that faster.
If I segment into multiple databases, everything is still single threaded, and I still only get to use one core. If I just launch another instance of Redis on the same box, I get to use an extra core. On top of that, I can't name Redis databases, or give them any sort of more logical identifier. So, with all of that said, why/when would I ever want to use multiple Redis databases instead of just spinning up an extra instance of Redis for each extra database I want? And relatedly, why doesn't Redis try to utilize an extra core for each extra database I add? What's the advantage of being single threaded across databases?
| You don't want to use multiple databases in a single redis instance. As you noted, multiple instances let you take advantage of multiple cores. If you use database selection you will have to refactor when upgrading. Monitoring and managing multiple instances is neither difficult nor painful.
Indeed, you would get far better metrics on each db by segregation based on instance. Each instance would have stats reflecting that segment of data, which can allow for better tuning and more responsive and accurate monitoring. Use a recent version and separate your data by instance.
As Jonaton said, don't use the keys command. You'll find far better performance if you simply create a key index. Whenever adding a key, add the key name to a set. The keys command is not terribly useful once you scale up since it will take significant time to return.
Let the access pattern determine how to structure your data rather than store it the way you think works and then working around how to access and mince it later. You will see far better performance and find the data consuming code often is much cleaner and simpler.
Regarding single threaded, consider that redis is designed for speed and atomicity. Sure actions modifying data in one db need not wait on another db, but what if that action is saving to the dump file, or processing transactions on slaves? At that point you start getting into the weeds of concurrency programming.
By using multiple instances you turn multi threading complexity into a simpler message passing style system.
| Redis | 16,221,563 | 268 |
I am new to message brokers like RabbitMQ which we can use to create tasks / message queues for a scheduling system like Celery.
Now, here is the question:
I can create a table in PostgreSQL which can be appended with new tasks and consumed by the consumer program like Celery.
Why on earth would I want to setup a whole new tech for this like RabbitMQ?
Now, I believe scaling cannot be the answer since our database like PostgreSQL can work in a distributed environment.
I googled for what problems does the database poses for the particular problem, and I found:
polling keeps the database busy and low performing
locking of the table -> again low performing
millions of rows of tasks -> again, polling is low performing
Now, how does RabbitMQ or any other message broker like that solves these problems?
Also, I found out that AMQP protocol is what it follows. What's great in that?
Can Redis also be used as a message broker? I find it more analogous to Memcached than RabbitMQ.
Please shed some light on this!
| Rabbit's queues reside in memory and will therefore be much faster than implementing this in a database. A (good) dedicated message queue should also provide essential queuing-related features such as throttling/flow control, and the ability to choose different routing algorithms, to name a couple (rabbit provides these and more). Depending on the size of your project, you may also want the message passing component separate from your database, so that if one component experiences heavy load, it need not hinder the other's operation.
As for the problems you mentioned:
polling keeping the database busy and low performing: Using Rabbitmq, producers can push updates to consumers which is far more performant than polling. Data is simply sent to the consumer when it needs to be, eliminating the need for wasteful checks.
locking of the table -> again low performing: There is no table to lock :P
millions of rows of tasks -> again polling is low performing: As mentioned above, Rabbitmq will operate faster as it resides in RAM, and provides flow control. If needed, it can also use the disk to temporarily store messages if it runs out of RAM. After 2.0, Rabbit has significantly improved on its RAM usage. Clustering options are also available.
In regards to AMQP, I would say a really cool feature is the "exchange", and the ability for it to route to other exchanges. This gives you more flexibility and enables you to create a wide array of elaborate routing topologies which can come in very handy when scaling. For a good example, see:
[diagram of RabbitMQ exchange routing topologies omitted] (source: springsource.com)
and: http://blog.springsource.org/2011/04/01/routing-topologies-for-performance-and-scalability-with-rabbitmq/
Finally, in regards to Redis, yes, it can be used as a message broker, and can do well. However, Rabbitmq has more message queuing features than Redis, as rabbitmq was built from the ground up to be a full-featured enterprise-level dedicated message queue. Redis on the other hand was primarily created to be an in-memory key-value store (though it does much more than that now; it's even referred to as a swiss army knife). Still, I've read/heard many people achieving good results with Redis for smaller sized projects, but haven't heard much about it in larger applications.
Here is an example of Redis being used in a long-polling chat implementation: http://eflorenzano.com/blog/2011/02/16/technology-behind-convore/
| Redis | 13,005,410 | 268 |
Is there a way to print the number of keys in Redis?
I am aware of
keys *
But that seems slightly heavyweight. Given that Redis is a key value store, maybe this is the only way to do it. But I would still like to see something along the lines of
count keys *
| You can issue the INFO command, which returns information and statistics about the server. See here for an example output.
As mentioned in the comments by mVChr, you can use info keyspace directly on the redis-cli.
redis> INFO
# Server
redis_version:6.0.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b63575307aaffe0a
redis_mode:standalone
os:Linux 5.4.0-1017-aws x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:9.3.0
process_id:2854672
run_id:90a5246f10e0aeb6b02cc2765b485d841ffc924e
tcp_port:6379
uptime_in_seconds:2593097
uptime_in_days:30
hz:10
configured_hz:10
lru_clock:4030200
executable:/usr/local/bin/redis-server
| Redis | 9,888,387 | 243 |
Trying to grasp some basics of Redis, I came across an interesting blog post.
The author states:
Redis is single-threaded with epoll/kqueue and scale indefinitely in terms of I/O concurrency.
I surely misunderstand the whole threading thing, because I find this statement puzzling. If a program is single-threaded, how does it do anything concurrently? Why is it so great that Redis operations are atomic, if the server is single-threaded anyway?
Could anybody please shed some light on the issue?
| Well it depends on how you define concurrency.
In server-side software, concurrency and parallelism are often considered as different concepts. In a server, supporting concurrent I/Os means the server is able to serve several clients by executing several flows corresponding to those clients with only one computation unit. In this context, parallelism would mean the server is able to perform several things at the same time (with multiple computation units), which is different.
For instance a bartender is able to look after several customers while he can only prepare one beverage at a time. So he can provide concurrency without parallelism.
This question has been debated here:
What is the difference between concurrency and parallelism?
See also this presentation from Rob Pike.
A single-threaded program can definitely provide concurrency at the I/O level by using an I/O (de)multiplexing mechanism and an event loop (which is what Redis does).
Parallelism has a cost: with the multiple sockets/multiple cores you can find on modern hardware, synchronization between threads is extremely expensive. On the other hand, the bottleneck of an efficient storage engine like Redis is very often the network, well before the CPU. Isolated event loops (which require no synchronization) are therefore seen as a good design to build efficient, scalable, servers.
The fact that Redis operations are atomic is simply a consequence of the single-threaded event loop. The interesting point is atomicity is provided at no extra cost (it does not require synchronization). It can be exploited by the user to implement optimistic locking and other patterns without paying for the synchronization overhead.
| Redis | 10,489,298 | 242 |
I currently have a live redis server running on a cloud instance and I want to migrate this redis server to a new cloud instance and use that instance as my new redis server. If it were MySQL, I would export the DB from the old server and import it into the new server. How should I do this with redis?
P.S.: I'm not looking to set-up replication. I want to completely migrate the redis server to a new instance.
| First, create a dump on server A.
A$ redis-cli
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/var/lib/redis/"
127.0.0.1:6379> SAVE
OK
This ensures dump.rdb is completely up-to-date, and shows us where it is stored (/var/lib/redis/dump.rdb in this case). dump.rdb is also periodically written to disk automatically.
Next, copy it to server B:
A$ scp /var/lib/redis/dump.rdb myuser@B:/tmp/dump.rdb
Stop the Redis server on B, copy dump.rdb (ensuring permissions are the same as before), then start.
B$ sudo service redis-server stop
B$ sudo cp /tmp/dump.rdb /var/lib/redis/dump.rdb
B$ sudo chown redis: /var/lib/redis/dump.rdb
B$ sudo service redis-server start
The version of Redis on B must be greater or equal than that of A, or you may hit compatibility issues.
| Redis | 6,004,915 | 234 |
I have not used Redis yet, but I have heard about it and plan to try using it for caching data.
I have heard that Redis uses memory as a cache store database. What's the point of Redis, since I can use an object or dictionary to store data? Like this:
var cache = {
key: {
},
key: {
}
...
}
What are the advantages of using Redis?
| Redis is a remote data structure server. It is certainly slower than just storing the data in local memory (since it involves socket roundtrips to fetch/store the data). However, it also brings some interesting properties:
Redis can be accessed by all the processes of your applications, possibly running on several nodes (something local memory cannot achieve).
Redis memory storage is quite efficient, and done in a separate process. If the application runs on a platform whose memory is garbage collected (node.js, java, etc ...), it allows handling a much bigger memory cache/store. In practice, very large heaps do not perform well with garbage collected languages.
Redis can persist the data on disk if needed.
Redis is a bit more than a simple cache: it provides various data structures, various item eviction policies, blocking queues, pub/sub, atomicity, Lua scripting, etc ...
Redis can replicate its activity with a master/slave mechanism in order to implement high-availability.
Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required.
| Redis | 19,477,821 | 222 |
I have read great things about key/value stores such as Redis but I can't seem to figure out when it's time to use it in an application.
Say I am architecting a web-based application; I know what stack I am going to use for the front-end, back-end, database(s), etc. What are some scenarios where I would go "oh we also need Redis for X, Y, or Z"?
I would appreciate node.js examples as well as non-node.js examples.
|
I can't seem to figure out when it's time to use it in an application.
I would recommend reading this tutorial, which also contains use cases. Since Redis is rather memory oriented it's really good for frequently updated real-time data, such as a session store, state database, statistics, and caching, and its advanced data structures offer versatility in many other scenarios.
Redis, however, isn't a NoSQL replacement for classic relational databases since it doesn't support many standard features of the RDBMS world, such as querying of your data, which might slow it down. Replacements are rather document databases like MongoDB or CouchDB, and Redis is great at supplementing specific functionality where speed and support for advanced data structures come in handy.
| Redis | 7,535,184 | 220 |
It's widely mentioned that Redis is "Blazing Fast" and mongoDB is fast too. But, I'm having trouble finding actual numbers comparing the results of the two. Given similar configurations, features and operations (and maybe showing how the factor changes with different configurations and operations), etc, is Redis 10x faster?, 2x faster?, 5x faster?
I'm ONLY speaking of performance. I understand that mongoDB is a different tool and has a richer feature set. This is not the "Is mongoDB better than Redis" debate. I'm asking, by what margin does Redis outperform mongoDB?
At this point, even cheap benchmarks are better than no benchmarks.
| Rough results from the benchmark below: Redis is roughly 2x faster for writes and 3x faster for reads.
Here's a simple benchmark in python you can adapt to your purposes, I was looking at how well each would perform simply setting/retrieving values:
#!/usr/bin/env python2.7
import sys, time
from pymongo import Connection
import redis
# connect to redis & mongodb
redis = redis.Redis()
mongo = Connection().test
collection = mongo['test']
collection.ensure_index('key', unique=True)
def mongo_set(data):
for k, v in data.iteritems():
collection.insert({'key': k, 'value': v})
def mongo_get(data):
for k in data.iterkeys():
val = collection.find_one({'key': k}, fields=('value',)).get('value')
def redis_set(data):
for k, v in data.iteritems():
redis.set(k, v)
def redis_get(data):
for k in data.iterkeys():
val = redis.get(k)
def do_tests(num, tests):
# setup dict with key/values to retrieve
data = {'key' + str(i): 'val' + str(i)*100 for i in range(num)}
# run tests
for test in tests:
start = time.time()
test(data)
elapsed = time.time() - start
print "Completed %s: %d ops in %.2f seconds : %.1f ops/sec" % (test.__name__, num, elapsed, num / elapsed)
if __name__ == '__main__':
num = 1000 if len(sys.argv) == 1 else int(sys.argv[1])
tests = [mongo_set, mongo_get, redis_set, redis_get] # order of tests is significant here!
do_tests(num, tests)
Results with mongodb 1.8.1 and redis 2.2.5 and the latest pymongo/redis-py:
$ ./cache_benchmark.py 10000
Completed mongo_set: 10000 ops in 1.40 seconds : 7167.6 ops/sec
Completed mongo_get: 10000 ops in 2.38 seconds : 4206.2 ops/sec
Completed redis_set: 10000 ops in 0.78 seconds : 12752.6 ops/sec
Completed redis_get: 10000 ops in 0.89 seconds : 11277.0 ops/sec
Take the results with a grain of salt of course! If you are programming in another language, using other clients/different implementations, etc your results will vary wildly. Not to mention your usage will be completely different! Your best bet is to benchmark them yourself, in precisely the manner you are intending to use them. As a corollary you'll probably figure out the best way to make use of each. Always benchmark for yourself!
| Redis | 5,252,577 | 220 |
I've been playing with redis (and having some fun with it) during the last few days and I'd like to know if there is a way to empty the db (remove the sets, the existing keys....) easily.
During my tests, I created several sets with a lot of members, and even created sets whose names I do not remember (how can I list those though?).
Any idea about how to get rid of all of them ?
| You have two options:
FLUSHDB - clears currently active database
FLUSHALL - clears all the existing databases
| Redis | 5,756,067 | 203 |
I have worked quite a bit with memcached the last weeks and just found out about Redis. When I read this part of their readme, I suddenly got a warm, cozy feeling in my stomach:
Redis can be used as a memcached on steroids because is as fast as
memcached but with a number of
features more.
Like memcached, Redis also supports setting timeouts to keys so
that this key will be automatically
removed when a given amount of time
passes.
This sounds amazing. I'd also found this page with benchmarks: http://www.ruturaj.net/redis-memcached-tokyo-tyrant-mysql-comparison
So, honestly - Is memcache really that old dinosaur that is a bad choice from a performance perspective when compared to this newcomer called Redis?
I hadn't heard a lot about Redis previously, hence the question!
| Depends on what you need, in general I think that:
You should not care too much about performance. Redis is faster per core with small values, but memcached is able to use multiple cores with a single executable and TCP port without help from the client. Also memcached is faster with big values in the order of 100k. Redis recently improved a lot about big values (unstable branch) but still memcached is faster in this use case. The point here is: neither one nor the other is likely to be your bottleneck for the queries per second they can deliver.
You should care about memory usage. For simple key-value pairs memcached is more memory efficient. If you use Redis hashes, Redis is more memory efficient. Depends on the use case.
You should care about persistence and replication, two features only available in Redis. Even if your goal is to build a cache it helps that after an upgrade or a reboot your data are still there.
You should care about the kind of operations you need. In Redis there are a lot of complex operations, and even just considering the caching use case, you often can do a lot more in a single operation, without requiring data to be processed client side (where a lot of I/O is sometimes needed). These operations are often as fast as plain GET and SET. So if you don't need just GET/SET but more complex things, Redis can help a lot (think of timeline caching).
Without a use case it is hard to pick the right one now, but I think that for a lot of things Redis makes sense, since even when you don't want to use it as a DB, it is a lot more capable so you can solve more problems, not just caching but also messaging, ranking, and so forth.
P.s. of course I could be biased since I'm the lead developer of the Redis project.
| Redis | 2,873,249 | 193 |
I tried to run brew install redis-cli and googled, but found nothing. Any ideas?
| If you install redis with homebrew, you can see what's in the package like this:
brew install redis
brew ls redis
You will see that it only installs very few files indeed anyway:
/usr/local/Cellar/redis/3.2.3/bin/redis-benchmark
/usr/local/Cellar/redis/3.2.3/bin/redis-check-aof
/usr/local/Cellar/redis/3.2.3/bin/redis-check-rdb
/usr/local/Cellar/redis/3.2.3/bin/redis-cli
/usr/local/Cellar/redis/3.2.3/bin/redis-sentinel
/usr/local/Cellar/redis/3.2.3/bin/redis-server
/usr/local/Cellar/redis/3.2.3/homebrew.mxcl.redis.plist
Or, you can look directly in homebrew's Cellar, like this:
ls -lR /usr/local/Cellar/redis/3.2.3
total 40
-rw-r--r-- 1 mark admin 1487 2 Aug 10:00 COPYING
-rw-r--r-- 1 mark admin 376 9 Aug 10:34 INSTALL_RECEIPT.json
-rw-r--r-- 1 mark admin 6834 2 Aug 10:00 README.md
drwxr-xr-x 8 mark admin 272 2 Aug 10:00 bin
-rw-r--r-- 1 mark admin 785 9 Aug 10:34 homebrew.mxcl.redis.plist
/usr/local/Cellar/redis/3.2.3/bin:
total 3440
-r-xr-xr-x 1 mark admin 67668 2 Aug 10:00 redis-benchmark
-r-xr-xr-x 1 mark admin 13936 2 Aug 10:00 redis-check-aof
-r-xr-xr-x 1 mark admin 768704 2 Aug 10:00 redis-check-rdb
-r-xr-xr-x 1 mark admin 129712 2 Aug 10:00 redis-cli
lrwxr-xr-x 1 mark admin 12 2 Aug 10:00 redis-sentinel -> redis-server
-r-xr-xr-x 1 mark admin 768704 2 Aug 10:00 redis-server
So, a lot of it is the licence, README and, of the 6 binaries, one is a symlink anyway. So it is not a heavy-weight installation with loads of services and config files anyway.
By the way, you could always pull and run the docker redis-cli without installing anything:
docker run --rm -it redis:alpine redis-cli -h 192.168.0.8 # change to your Redis host's IP
If you actually just want to install the very least software you possibly can, you don't actually have to install anything! The Redis protocol is pretty simple, so you can build up a command in bash and send it yourself like this:
#!/bin/bash
################################################################################
# redis.sh
# Very, very simplistic Redis client in bash
# Mark Setchell
# Usage:
# redis.sh SET answer 42
#
# Ref: https://redis.io/topics/mass-insert
################################################################################
if [ $# -lt 2 ] ; then
echo "Usage: redis.sh SET answer 42" >&2
exit 1
fi
# Build protocol string
protocol="*$#\r\n"
for var in "$@" ; do
protocol+="$"
protocol+="${#var}\r\n${var}\r\n"
done
# Send to Redis on default port on local host - but you can change it
printf "$protocol" > /dev/tcp/localhost/6379
| Redis | 39,704,273 | 187 |
I understand redis sentinel is a way of configuring HA (high availability) among multiple redis instances. As I see it, there is one redis instance actively serving the client requests at any given time, and two additional servers are on standby (waiting for a failure to happen, so one of them can be in action again).
Is it waste of resources?
Is there a better way of using full use of the resources available?
Is Redis clustering an alternative to Redis sentinel?
I already looked up redis documentation for sentinel and clustering, can somebody having experience explain please.
UPDATE
OK. In my real deployment scenario I have two servers dedicated to redis. I have another server where my JBoss server is running. The application running in JBoss is configured to connect to the redis master server (M).
Failover scenario
Ideally, I think when Master cache server fails (either Redis process goes down or machine failure) the application in Jboss needs to connect to Slave cache server. How would I configure the redis servers to achieve this?
+--------+ +--------+
| Master |---------| Slave |
| | | |
+--------+ +--------+
Configuration: quorum = 1
| First, let's talk sentinel.
Sentinel manages the failover, it doesn't configure Redis for HA. It is an important distinction. Second, the diagram you posted is actually a bad setup - you don't want to run Sentinel on the same node as the Redis nodes it is managing. When you lose that host you lose both.
As to "Is it waste of resources?" it depends on your use case. You don't need three Redis nodes in that setup, you only need two. Three increases your redundancy, but is not required. If you need the added redundancy then it isn't a waste of resources. If you don't need redundancy then you just run a single Redis instance and call it good - as running more would be "wasted".
Another reason for running two slaves would be to split reads. Again, if you need it then it wouldn't be a waste.
As to "Is there a better way of using full use of the resources available?" we can't answer that as it is far too dependent on your specific scenario and code. That said if the amount of data to store is "small" and the command rate is not exceedingly high, then remember you don't need to dedicate a host to Redis.
Now for "Is Redis clustering an alternative to Redis sentinel?".
It really depends entirely on your use case. Redis Cluster is not an HA solution - it is a multiple writer/larger-than-ram solution. If your goal is just HA then it likely won't be suitable for you. Redis Cluster comes with limitations, particularly around multi-key operations, so it isn't necessarily a straightforward "just use cluster" operation.
If you think having three hosts running Redis (and three running sentinel) is wasteful, you'll likely hold Cluster to be even more so as it does require more resources.
The questions you've asked are probably too broad and opinion-based to survive as written. If you have a specific case/problem you are working out please update with that so we can provide specific assistance and information.
Update for specifics:
For proper failover management in your scenario I would go with 3 sentinels, one running on your JBoss server. If you have 3 JBoss nodes then go with one on each. I'd have a Redis pod (master+slave) on separate nodes, and let sentinel manage the failover.
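As a rough sketch (the IPs, names and timeouts below are placeholders, not a tested configuration), each of the 3 sentinels would run with a config along these lines; the quorum of 2 means two sentinels must agree the master is down before a failover starts:
port 26379
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
Start it with redis-sentinel /path/to/sentinel.conf (or redis-server /path/to/sentinel.conf --sentinel).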
From there it is a matter of wiring up JBoss/Jedis to use Sentinel for its information and connection management. As I don't use those, a quick search turns up that Jedis has support for it; you just need to configure it correctly. Some examples I found are at Looking for an example of Jedis with Sentinel and https://github.com/xetorthio/jedis/issues/725 which talk about JedisSentinelPool being the route for using a pool.
When Sentinel executes a failover the clients will be disconnected and Jedis will (should?) handle the reconnection by asking the Sentinels who the current master is.
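As an illustration of how a client asks Sentinel for the current master, here is a minimal sketch with redis-py — shown only because the discovery pattern is the same whatever the client; this question's stack is JBoss/Jedis, and the sentinel addresses and the master name 'mymaster' below are assumptions:
from redis.sentinel import Sentinel

# hypothetical sentinel addresses; 'mymaster' is the name the sentinels monitor in sentinel.conf
sentinel = Sentinel([('10.0.0.1', 26379), ('10.0.0.2', 26379), ('10.0.0.3', 26379)],
                    socket_timeout=0.5)

master = sentinel.master_for('mymaster', socket_timeout=0.5)   # writes go to the current master
replica = sentinel.slave_for('mymaster', socket_timeout=0.5)   # reads can go to a slave

master.set('greeting', 'hello')
print(replica.get('greeting'))
After a failover the same master_for() call transparently resolves to the newly promoted master.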
| Redis | 31,143,072 | 180 |
I can ping pong Redis on the server:
# redis-cli ping
PONG
But remotely, I got problems:
$ src/redis-cli -h REMOTE.IP ping
Could not connect to Redis at REMOTE.IP:6379: Connection refused
In config, I got the standard port:
# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
So maybe I should open port 6379 on the remote Ubuntu machine? How do I do it?
| Did you set the bind option to allow remote access on the redis server?
Before (file /etc/redis/redis.conf)
bind 127.0.0.1
After
bind 0.0.0.0
and run sudo service redis-server restart to restart the server. If that's not the problem, you might want to check any firewalls that might block the access.
Important: If you don't use a firewall (iptables, ufw..) to control who connects to the port in use, ANYONE can connect to this Redis instance. Without using Redis' AUTH that means anyone can access/change/delete your data. Be safe!
| Redis | 19,091,087 | 173 |
I'm working with Redis on my local machine, so I don't really need to set up a password to connect to the server with my PHP client (I'm using Predis as a client). However, I'm moving my app to a live server, so I want to set up a password to connect to my Redis server.
I have few questions:
I checked all over the internet for how to set up the password, and it looks like I need to add the password in redis.conf. However, I couldn't find what exactly I should add to the configuration file to set up the password.
Also, in Predis, how should I add the password? I'm using the following array of parameters to connect to the Redis server:
$my_server = array('host' => '127.0.0.1','port' =>
6379,'database' => 1);
should I add the password this way?
> $my_server = array('host' => '127.0.0.1','port' =>
> 6379,'database' => 1,'password'=>password);
Last question: I'm trying to stop my redis-server on the live server. Every time I enter the following command, I keep getting the same error message:
redis-server stop
[23925] 23 Sep 20:23:03 # Fatal error, can't open config file 'stop'
usually on my local machine I enter
/etc/init.d/redis-server stop
to stop the Redis server, but that's not working on my live server since there is no script called redis-server in my /etc/init.d.
| To set the password, edit your redis.conf file, find this line
# requirepass foobared
Then uncomment it and change foobared to your password. Make sure you choose something pretty long, 32 characters or so would probably be good, it's easy for an outside user to guess upwards of 150k passwords a second, as the notes in the config file mention.
To authenticate with your new password using predis, the syntax you have shown is correct. Just add password as one of the connection parameters.
To shut down redis... check in your config file for the pidfile setting, it will probably be
pidfile /var/run/redis.pid
From the command line, run:
cat /var/run/redis.pid
That will give you the process id of the running server, then just kill the process using that pid:
kill 3832
Update
I also wanted to add that you could also make the /etc/init.d/redis-server stop command you're used to work on your live server. All those files in /etc/init.d/ are just shell scripts. Take the redis-server script from your local server, copy it to the live server in the same location, and then just look at what it does with vi or whatever you like to use. You may need to modify some paths and such, but it should be pretty simple.
| Redis | 7,537,905 | 171 |
I want to use redis-py for caching some data, but I can't find a suitable explanation of the difference between redis.StrictRedis() and redis.Redis(). Are they equivalent?
In addition, I can't find any clear documentation about redis.StrictRedis()'s arguments in Redis Python Docs.
Any idea?
| EDIT: They are now equivalent:
redis-py 3.0 drops support for the legacy "Redis" client class.
"StrictRedis" has been renamed to "Redis" and an alias named
"StrictRedis" is provided so that users previously using "StrictRedis"
can continue to run unchanged.
Original answer:
This seems pretty clear:
redis-py exposes two client classes that implement these commands
The StrictRedis class attempts to adhere to the official command syntax.
and
In addition to the changes above, the Redis class, a subclass of StrictRedis,
overrides several other commands to provide backwards compatibility with older
versions of redis-py
Do you need backwards compatibility? Use Redis. Don't care? Use StrictRedis.
2017-03-31
Here are the specifics of the backwards compatibility, from the github.com link cited:
In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py:
LREM: Order of 'num' and 'value' arguments reversed such that 'num' can provide a default value of zero.
ZADD: Redis specifies the 'score' argument before 'value'. These were swapped accidentally when being implemented and not discovered until after people were already using it. The Redis class expects *args in the form of: name1, score1, name2, score2, ...
SETEX: Order of 'time' and 'value' arguments reversed.
| Redis | 19,021,765 | 168 |
I like to use verbose names in Redis, for instance set-allBooksBelongToUser:$userId.
Is this OK, or does it impact performance?
| The key you're talking about using isn't really all that long.
The example key you give is for a set, set lookup methods are O(1). The more complex operations on a set (SDIFF, SUNION, SINTER) are O(N). Chances are that populating $userId was a more expensive operation than using a longer key.
Redis comes with a benchmark utility called redis-benchmark. If you modify the "GET" test in src/redis-benchmark.c so that the key is just "foo", you can run the short-key test after a make install:
diff --git a/src/redis-benchmark.c b/src/redis-benchmark.c
--- a/src/redis-benchmark.c
+++ b/src/redis-benchmark.c
@@ -475,11 +475,11 @@
benchmark("MSET (10 keys)",cmd,len);
free(cmd);
- len = redisFormatCommand(&cmd,"SET foo:rand:000000000000 %s",data);
+ len = redisFormatCommand(&cmd,"SET foo %s",data);
benchmark("SET",cmd,len);
free(cmd);
- len = redisFormatCommand(&cmd,"GET foo:rand:000000000000");
+ len = redisFormatCommand(&cmd,"GET foo");
benchmark("GET",cmd,len);
free(cmd);
Here's the GET test speed for 3 subsequent runs of the short key "foo":
59880.24 requests per second
58139.53 requests per second
58479.53 requests per second
Here's the GET test speed after modifying the source again and changing the key to "set-allBooksBelongToUser:1234567890":
60240.96 requests per second
60606.06 requests per second
58479.53 requests per second
Changing the key yet again to "ipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumloreipsumlorem:1234567890" gives this:
58479.53 requests per second
58139.53 requests per second
56179.77 requests per second
So even really really long keys don't have a large impact on the speed of redis. And this is on GET, a O(1) operation. More complex operations would be even less sensitive to this.
I think that having keys that clearly identify what values they hold greatly outweighs any minuscule speed gain you'd get from abbreviated keys.
If you wanted to take this further, there's also a -r [keyspacelen] parameter on the redis-benchmark utility that lets it create random keys (as long as they have ':rand:' in them), you could just increase the size of the prefix in the testing code to whatever length you wanted.
| Redis | 6,320,739 | 166 |
Is it currently only possible to expire an entire key/value pair? What if I want to add values to a List type structure and have them get auto removed 1 hour after insertion. Is that currently possible, or would it require running a cron job to do the purging manually?
| There is a common pattern that solves this problem quite well.
Use sorted sets, and use a timestamp as the score. It's then trivial to delete items by score range, which could be done periodically, or only on every write, with reads always ignoring the out of range elements, by reading only a range of scores.
More here: https://groups.google.com/forum/#!topic/redis-db/rXXMCLNkNSs
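A minimal sketch of that pattern with redis-py (this assumes redis-py 3.x, where zadd takes a mapping; the key name 'recent-items' and the one-hour window are just example choices):
import time
import redis

r = redis.StrictRedis()

def add_item(value):
    # the score is the insertion timestamp
    r.zadd('recent-items', {value: time.time()})

def get_items(max_age_seconds=3600):
    cutoff = time.time() - max_age_seconds
    # trim everything older than the window, then return what is left
    r.zremrangebyscore('recent-items', '-inf', cutoff)
    return r.zrangebyscore('recent-items', cutoff, '+inf')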
| Redis | 7,577,923 | 163 |
I have a very small amount of data saved in Redis, and the following works as expected and allows me to download all keys.
redis-cli keys *
Is there any way to get the keys together with their values?
| There's no command for that, but you can write a script to do so.
You will need to run a TYPE command for each key:
> type <key>
and depending on the response perform:
for "string": get <key>
for "hash": hgetall <key>
for "list": lrange <key> 0 -1
for "set": smembers <key>
for "zset": zrange <key> 0 -1 withscores
Keep in mind that for hashes and sorted sets you will be getting the keys/scores and values.
A possible sh implementation:
#!/bin/sh -eu
keys=`redis-cli keys '*'`
if [ "$keys" ]; then
echo "$keys" | while IFS= read -r key; do
type=`echo | redis-cli type "$key"`
case "$type" in
string) value=`echo | redis-cli get "$key"`;;
hash) value=`echo | redis-cli hgetall "$key"`;;
set) value=`echo | redis-cli smembers "$key"`;;
list) value=`echo | redis-cli lrange "$key" 0 -1`;;
zset) value=`echo | redis-cli zrange "$key" 0 -1 withscores`;;
esac
echo "> $key ($type):"
echo "$value" | sed -E 's/^/ /'
done
fi
But do note:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout.
https://redis.io/commands/keys
| Redis | 8,078,018 | 153 |
My Redis instance seems to be growing very large and I'd like to find out which of the multiple databases I have in there consumes how much memory. Redis' INFO command just shows me the total size and the number of keys per database, which doesn't give me much insight... So any tools/ideas that give me more information when monitoring the Redis server would be appreciated.
The Redis documentation doesn't show me any commands that can return the consumed memory of certain keys, so I guess that if any buggy code wrote a lot of "trash" to Redis, this could be really hard to find...
So my solution to my own problem: after playing around with redis-cli a bit longer I found out that DEBUG OBJECT <key> reveals something like the serializedlength of a key, which was in fact something I was looking for...
For a whole database you need to aggregate all values for KEYS * which shouldn't be too difficult with a scripting language of your choice...
The bad thing is that redis.io doesn't really have a lot of information about DEBUG OBJECT.
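A rough sketch of that aggregation with redis-py follows; note that KEYS * blocks the server on large databases and serializedlength is not the real in-memory size, so treat the result as an approximation (the db index 0 is just an example — repeat for each database you use):
import redis

total = 0
r = redis.StrictRedis(db=0)
for key in r.keys('*'):
    info = r.debug_object(key)       # dict containing 'serializedlength' among other fields
    total += info.get('serializedlength', 0)

print('db 0: ~%d bytes serialized' % total)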
| Redis | 7,638,542 | 146 |
This might be an easy question, but I am having a hard time finding the answer. How does Redis 2.0 handle running out of maximum allocated memory? How does it decide which data to remove or which data to keep in memory?
| If you have virtual memory functionality turned on (EDIT: now deprecated), then Redis starts to store the "not-so-frequently-used" data to disk when memory runs out.
If virtual memory in Redis is disabled (the default) and the maxmemory parameter is set (the default), Redis will not use any more memory than maxmemory allows. If you turn maxmemory off, Redis will start using virtual memory (i.e. swap), and performance will drop tremendously.
Newer versions of Redis have various policies when maxmemory is reached:
volatile-lru - remove a key among the ones with an expire set, trying to remove keys not recently used.
volatile-ttl - remove a key among the ones with an expire set, trying to remove keys with a short remaining time to live.
volatile-random - remove a random key among the ones with an expire set.
allkeys-lru - like volatile-lru, but will remove every kind of key, both normal keys and keys with an expire set.
allkeys-random - like volatile-random, but will remove every kind of key, both normal keys and keys with an expire set.
If you pick a policy that only removes keys with an EXPIRE set, then when Redis runs out of memory, it looks like the program just aborts the malloc() operation. That is, if you try to store more data, the write operation simply fails.
Some links for more info:
http://antirez.com/post/redis-as-LRU-cache.html
http://eli.thegreenplace.net/2009/10/30/handling-out-of-memory-conditions-in-c/
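If you prefer to apply a policy at runtime rather than editing redis.conf, here is a minimal sketch with redis-py; the 256mb limit and the allkeys-lru choice are just example values:
import redis

r = redis.StrictRedis()
r.config_set('maxmemory', '256mb')
r.config_set('maxmemory-policy', 'allkeys-lru')
print(r.config_get('maxmemory-policy'))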
| Redis | 5,068,518 | 146 |
As you can see from the attached image, I've got a couple of workers that seem to be stuck. Those processes shouldn't take longer than a couple of seconds.
I'm not sure why they won't clear or how to manually remove them.
I'm on Heroku using Resque with Redis-to-Go and HireFire to automatically scale workers.
| None of these solutions worked for me, I would still see this in redis-web:
0 out of 10 Workers Working
Finally, this worked for me to clear all the workers:
Resque.workers.each {|w| w.unregister_worker}
| Redis | 7,416,318 | 143 |
I was wondering how to disable persistence in Redis. There is mention of the possibility of doing this here: http://redis.io/topics/persistence. I mean it in the exact same sense as described there. Any help would be very much appreciated!
| To disable all data persistence in Redis do the following in the redis.conf file:
Disable AOF by setting the appendonly configuration directive to no (it is the default value), like this:
appendonly no
Disable RDB snapshotting by commenting all of the save configuration directives (there are 3 that are defined by default) and explicitly disabling saving:
#save 900 1
#save 300 10
#save 60 10000
save ""
After these changes, make sure you restart Redis to apply them.
Alternatively, you can use the CONFIG SET command to apply these changes during runtime (just make sure you also do a CONFIG REWRITE to persist the changes).
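A minimal sketch of those runtime commands through redis-py (an assumption on my part — the same can be done with redis-cli):
import redis

r = redis.StrictRedis()
r.config_set('appendonly', 'no')   # disable AOF
r.config_set('save', '')           # disable RDB snapshotting
r.config_rewrite()                 # write the running config back to redis.conf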
Note: depending on your Redis' version, there are other tweaks that prevent Redis from accessing the disk for replication-related tasks.
| Redis | 28,785,383 | 137 |
Currently I'm working on a Python project that requires implementing some background jobs (mostly for email sending and heavy database updates). I use Redis as the task broker. So at this point I have two candidates: Celery and RQ. I have had some experience with these job queues, but I want to ask you guys to share your experience of using these tools. So:
What are the pros and cons of using Celery vs. RQ?
Any examples of projects/tasks where Celery or RQ is the better fit?
Celery looks pretty complicated, but it's a full-featured solution. Actually, I don't think that I need all these features. On the other hand, RQ is very simple (e.g. configuration, integration), but it seems that it lacks some useful features (e.g. task revoking, code auto-reloading).
| Here is what I have found while trying to answer this exact same question. It's probably not comprehensive, and may even be inaccurate on some points.
In short, RQ is designed to be simpler all around. Celery is designed to be more robust. They are both excellent.
Documentation. RQ's documentation is comprehensive without being complex, and mirrors the project's overall simplicity - you never feel lost or confused. Celery's documentation is also comprehensive, but expect to be re-visiting it quite a lot when you're first setting things up as there are too many options to internalize
Monitoring. Celery's Flower and the RQ dashboard are both very simple to setup and give you at least 90% of all information you would ever want
Broker support. Celery is the clear winner, RQ only supports Redis. This means less documentation on "what is a broker", but also means you cannot switch brokers in the future if Redis no longer works for you. For example, Instagram considered both Redis and RabbitMQ with Celery. This is important because different brokers have different guarantees e.g. Redis cannot (as of writing) guarantee 100% that your messages are delivered.
Priority queues. RQs priority queue model is simple and effective - workers read from queues in order. Celery requires spinning up multiple workers to consume from different queues. Both approaches work
OS Support. Celery is the clear winner here, as RQ only runs on systems that support fork e.g. Unix systems
Language support. RQ only supports Python, whereas Celery lets you send tasks from one language to a different language
API. Celery is extremely flexible (multiple result backends, nice config format, workflow canvas support) but naturally this power can be confusing. By contrast, the RQ api is simple.
Subtask support. Celery supports subtasks (e.g. creating new tasks from within existing tasks). I don't know if RQ does
Community and Stability. Celery is probably more established, but they are both active projects. As of writing, Celery has ~3500 stars on Github while RQ has ~2000 and both projects show active development
In my opinion, Celery is not as complex as its reputation might lead you to believe, but you will have to RTFM.
So, why would anyone be willing to trade the (arguably more full-featured) Celery for RQ? In my mind, it all comes down to the simplicity. By restricting itself to Redis+Unix, RQ provides simpler documentation, a simpler codebase, and a simpler API. This means you (and potential contributors to your project) can focus on the code you care about, instead of having to keep details about the task queue system in your working memory. We all have a limit on how many details can be in our head at once, and by removing the need to keep task queue details in there, RQ lets you get back to the code you care about. That simplicity comes at the expense of features like inter-language task queues, wide OS support, 100% reliable message guarantees, and the ability to switch message brokers easily.
| Redis | 13,440,875 | 134 |
# I have the dictionary my_dict
my_dict = {
    'var1' : 5,
'var2' : 9
}
r = redis.StrictRedis()
How would I store my_dict and retrieve it with redis. For example, the following code does not work.
#Code that doesn't work
r.set('this_dict', my_dict) # to store my_dict in this_dict
r.get('this_dict') # to retrieve my_dict
You can do it with hmset (multiple fields can be set using hmset).
hmset("RedisKey", dictionaryToSet)
import redis
conn = redis.Redis('localhost')
user = {"Name":"Pradeep", "Company":"SCTL", "Address":"Mumbai", "Location":"RCP"}
conn.hmset("pythonDict", user)
conn.hgetall("pythonDict")
{'Company': 'SCTL', 'Address': 'Mumbai', 'Location': 'RCP', 'Name': 'Pradeep'}
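Note that hmset has since been deprecated in redis-py (this assumes redis-py 3.5 or newer); the equivalent call uses hset with a mapping:
conn.hset("pythonDict", mapping=user)   # same effect as hmset on newer clients
conn.hgetall("pythonDict")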
| Redis | 32,276,493 | 133 |
There is a post about a Redis command to get all available keys, but I would like to do it with Python.
Any way to do this?
| Use scan_iter()
scan_iter() is superior to keys() for large numbers of keys because it gives you an iterator you can use rather than trying to load all the keys into memory.
I had 1B records in my redis and I could never get enough memory to return all the keys at once.
SCANNING KEYS ONE-BY-ONE
Here is a python snippet using scan_iter() to get all keys from the store matching a pattern and delete them one-by-one:
import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0)
for key in r.scan_iter("user:*"):
# delete the key
r.delete(key)
SCANNING IN BATCHES
If you have a very large number of keys to scan - for example, more than 100k keys - it will be more efficient to scan them in batches, like this:
import redis
from itertools import izip_longest
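# izip_longest exists only on Python 2; on Python 3 use itertools.zip_longest instead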
r = redis.StrictRedis(host='localhost', port=6379, db=0)
# iterate a list in batches of size n
def batcher(iterable, n):
args = [iter(iterable)] * n
return izip_longest(*args)
# in batches of 500 delete keys matching user:*
for keybatch in batcher(r.scan_iter('user:*'),500):
r.delete(*keybatch)
I benchmarked this script and found that using a batch size of 500 was 5 times faster than scanning keys one-by-one. I tested different batch sizes (3,50,500,1000,5000) and found that a batch size of 500 seems to be optimal.
Note that whether you use the scan_iter() or keys() method, the operation is not atomic and could fail part way through.
DEFINITELY AVOID USING XARGS ON THE COMMAND-LINE
I do not recommend this example I found repeated elsewhere. It will fail for unicode keys and is incredibly slow for even moderate numbers of keys:
redis-cli --raw keys "user:*"| xargs redis-cli del
In this example xargs creates a new redis-cli process for every key! That's bad.
I benchmarked this approach to be 4 times slower than the first python example where it deleted every key one-by-one and 20 times slower than deleting in batches of 500.
| Redis | 22,255,589 | 133 |
Are there any good browsers/explorers for viewing Redis out there?
I am new to Redis, so my expectation is that there is something similar to MongoVUE, Toad or SQLExplorer.
I tried Redis Admin UI from ServiceStack but ran into a 500 error when trying it on IIS.
| Redis Commander is great if you're using node.js already.
Super simple to get going with NPM:
npm install -g redis-commander
redis-commander
Then point your browser at the address in the console
| Redis | 12,292,351 | 131 |
I've just installed Redis successfully using the instructions in the Quick Start guide at http://redis.io/topics/quickstart on my Ubuntu 10.10 server. I'm running the service as a daemon (so it can be run by init.d).
The server is part of Rackspace Cluster with Internal and External IPs. The host is running on port 6379 (standard for Redis)
I've added a row in the iptables to allow incoming connections from port 6379 as shown below:
ACCEPT tcp -- anywhere anywhere tcp dpt:6379
In my PHP code on another server, I'm trying to connect to the new Redis server here:
$this->load->helper("iredis");
$hostname = "IP ADDRESS HERE";
$redis = new iRedis(array('hostname' => $hostname, 'port' => 6379));
Once I do this, I always get a connection refused error. In my redis.conf file, I have the local bind command commented out, so it should be listening on more than the localhost IP. I can connect to the database on the local machine, just not from another server. I've tried the external and internal IPs with no luck.
Any suggestions on getting this to work?
| I've been stuck with the same issue, and the preceding answer did not help me (albeit well written).
The solution is here : check your /etc/redis/redis.conf, and make sure to change the default
bind 127.0.0.1
to
bind 0.0.0.0
Then restart your service (service redis-server restart)
You can then now check that redis is listening on non-local interface with
redis-cli -h 192.168.x.x ping
(replace 192.168.x.x with your IP adress)
Important note : as several users stated, it is not safe to set this on a server which is exposed to the Internet. You should be certain that you redis is protected with any means that fits your needs.
| Redis | 8,537,254 | 130 |
We are defining an architecture to collect log information with Logstash shippers installed on various machines, index the data centrally in one Elasticsearch server, and use Kibana as the graphical layer. We need a reliable messaging system between the Logstash shippers and Elasticsearch to guarantee delivery. What factors should be considered when selecting Redis over RabbitMQ as a data broker/messaging system between the Logstash shippers and Elasticsearch, or vice versa?
| After evaluating both Redis and RabbitMQ I chose RabbitMQ as our broker for the following reasons:
RabbitMQ allows you to use a built-in layer of security by using SSL certificates to encrypt the data you are sending to the broker, which means that no one can sniff your data and gain access to your vital organizational data.
RabbitMQ is a very stable product that can handle large numbers of events per second and many connections without being the bottleneck.
Regarding scaling, RabbitMQ has a built-in cluster implementation that you can use in addition to a load balancer in order to implement a redundant broker environment.
Is my RabbitMQ cluster Active Active or Active Passive?
Now to the weaker points of using RabbitMQ:
Most Logstash shippers do not support RabbitMQ, but on the other hand, the best one, named Beaver, has an implementation that will send data to RabbitMQ without a problem.
The implementation that Beaver has with RabbitMQ in its current version is a little slow on performance (for my purposes) and was not able to handle a rate of 3000 events/sec from one server, and from time to time the service crashed.
Right now I am working on a fix that will solve the performance problem for RabbitMQ and make the Beaver shipper more stable. The first solution is to add more processes that can run simultaneously and will give the shipper more power. The second solution is to change Beaver to send data to RabbitMQ asynchronously which theoretically should be much faster. I hope that I’ll finish implementing both solutions by the end of this week.
You can follow the issue here:
https://github.com/josegonzalez/python-beaver/issues/323
And check the pull request here:
https://github.com/josegonzalez/python-beaver/pull/324
If you have more questions feel free to leave a comment.
| Redis | 29,539,443 | 124 |
I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node outside of Amazon EC2 instances, such as from my local dev setup or VPS instances provided by other vendors.
Currently when trying from my local set up:
redis-cli -h my-node-endpoint -p 6379
I only get a timeout after some time.
SSH port forwarding should do the trick. Try running this from your client.
ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>
Then from your client
redis-cli -h 127.0.0.1 -p 6379
Please note that the default port for Redis is 6379, not 6739. And also make sure you allow the security group of the EC2 node that you are using to connect to your Redis instance into your cache security group.
Also, AWS now supports accessing your cluster; more info here.
Update 04/13/2024:
Many folks are running Kubernetes today. It's a very typical scenario for folks to have services running in Kubernetes that access ElastiCache Redis.
So there is a way to do this (test your redis connection locally through Kubernetes) using the kubectl ssh jump plugin.
Follow the installation instructions. Then see case 2 here.
For example:
kubectl ssh-jump sshjump \
-i ~/.ssh/id_rsa_k8s -p ~/.ssh/id_rsa_k8s.pub \
-a "-L 6379:<your redis node endpoint>:6379"
and then from your client:
redis-cli -h 127.0.0.1 -p 6379
| Redis | 21,917,661 | 121 |
I know there are three different, popular types of NoSQL databases.
Key/Value: Redis, Tokyo Cabinet, Memcached
ColumnFamily: Cassandra, HBase
Document: MongoDB, CouchDB
I have read long blog posts about them without understanding much.
I know relational databases and get the hang of document-based databases like MongoDB/CouchDB.
Could someone tell me what the major differences are between these and the two former types on the list?
| The main differences are the data model and the querying capabilities.
Key-value stores
The first type is very simple and probably doesn't need any further explanation.
Data model: more than key-value stores
Although there is some debate on the correct name for databases such as Cassandra, I'd like to call them column-family stores. Although key-value pairs are an essential part of Cassandra, it's not limited to just that. It allows you to nest key-value pairs, so a key could refer to multiple sub-key-value pairs.
You cannot nest key-value pairs indefinitely though. You are limited to three levels (column families) or four levels of nesting (super-column families). In case the term column family doesn't ring a bell, see the WTF is a SuperColumn article, it's a good explanation of Cassandra's data model.
Document databases, such as CouchDB and MongoDB store entire documents in the form of JSON objects. You can think of these objects as nested key-value pairs. Unlike Cassandra, you can nest key-value pairs as much as you want. JSON also supports arrays and understands different data types, such as strings, numbers and boolean values.
Querying
I believe column-family stores can only be queried by key, or by writing map-reduce functions. You cannot query the values like you would in an SQL database. If your application needs more complex queries, your application will have to create and maintain indexes in order to access the desired data.
Document databases support queries by key and map-reduce functions as well, but also allow you to do basic queries by value, such as "Give me all users with more than 10 posts". Document databases are more flexible in this way.
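For instance, with pymongo (shown only as a sketch; the database and collection names are hypothetical), that kind of value query looks like:
from pymongo import MongoClient

db = MongoClient().mydb                              # 'mydb' is a hypothetical database name
heavy_posters = db.users.find({"posts": {"$gt": 10}})   # all users with more than 10 posts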
| Redis | 3,554,169 | 121 |
I'm currently using MySql to store my sessions. It works great, but it is a bit slow.
I've been asked to use Redis, but I'm wondering if it is a good idea because I've heard that Redis delays write operations. I'm a bit afraid because sessions need to be real-time.
Has anyone experienced such problems?
| Redis is perfect for storing sessions. All operations are performed in memory, and so reads and writes will be fast.
The second aspect is persistence of session state. Redis gives you a lot of flexibility in how you want to persist session state to your hard-disk. You can go through http://redis.io/topics/persistence to learn more, but at a high level, here are your options -
If you cannot afford losing any sessions, set appendfsync always in your configuration file. With this, Redis guarantees that any write operations are saved to the disk. The disadvantage is that write operations will be slower.
If you are okay with losing about 1s worth of data, use appendfsync everysec. This will give great performance with reasonable data guarantees
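If you would rather apply either setting at runtime than edit redis.conf, a minimal sketch with redis-py (an assumption on my part — the answer above describes the configuration file):
import redis

r = redis.StrictRedis()
r.config_set('appendfsync', 'everysec')   # or 'always' if losing even 1s of sessions is unacceptable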
| Redis | 10,278,683 | 117 |
I've heard of redis-cache, but how exactly does it work? Is it used as a layer between Django and my RDBMS, by caching the RDBMS queries somehow?
Or is it supposed to be used directly as the database? I doubt that, since that GitHub page doesn't cover any login details or setup; it just tells you to set some config property.
| This Python module for Redis has a clear usage example in the readme: http://github.com/andymccurdy/redis-py
Redis is designed to be a RAM cache. It supports basic GET and SET of keys plus the storing of collections such as dictionaries. You can cache RDBMS queries by storing their output in Redis. The goal would be to speed up your Django site. Don't start using Redis or any other cache until you need the speed - don't prematurely optimize.
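For the Django side, here is a minimal sketch of wiring Redis in as the cache backend and caching an RDBMS query; it assumes the django-redis package and Django 1.9+ for cache.get_or_set, neither of which is mentioned in the original answer:
# settings.py
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# somewhere in a view - cache an expensive RDBMS query for five minutes
from django.core.cache import cache
from django.contrib.auth.models import User

users = cache.get_or_set("all_users", lambda: list(User.objects.all()), 300)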
| Redis | 3,801,379 | 112 |
I want to build my PHP-FPM image with php-redis extension based on the official PHP Docker image, for example, using this Dockerfile: php:5.6-fpm.
The docs say that I can install extensions this way, installing dependencies for extensions manually:
FROM php:5.6-fpm
# Install modules (iconv, mcrypt and gd extensions)
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
&& docker-php-ext-install iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install gd
CMD ["php-fpm"]
Without Docker, I installed it with apt-get install php5-redis. But how can I install it using the approach above?
| Redis is not an extension that is included in “php-src”, therefore you cannot use docker-php-ext-install. Use PECL:
RUN pecl install --onlyreqdep --force redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
On alpine php 7.3.5 we can use:
RUN apk add --no-cache pcre-dev $PHPIZE_DEPS \
&& pecl install redis \
&& docker-php-ext-enable redis.so
| Redis | 31,369,867 | 111 |
I want to remove keys that match "user*".
How do I do that in redis command line?
| Another compact one-liner I use to do what you want is:
redis-cli KEYS "user*" | xargs redis-cli DEL
| Redis | 8,799,063 | 111 |
I was wondering what characters are considered valid in a Redis key. I have googled for some time and cannot find any useful info.
Like in Python, where a valid variable name belongs to the character class [a-zA-Z0-9_]. What are the requirements and conventions for Redis keys?
| Part of this is answered here, but this isn't completely a duplicate, as you're asking about allowed characters as well as conventions.
As for valid characters in Redis keys, the manual explains this completely:
Redis keys are binary safe, this means that you can use any binary sequence as a key, from a string like "foo" to the content of a JPEG file. The empty string is also a valid key.
A few other rules about keys:
Very long keys are not a good idea, for instance a key of 1024 bytes is a bad idea not only memory-wise, but also because the lookup of the key in the dataset may require several costly key-comparisons. Even when the task at hand is to match the existence of a large value, to resort to hashing it (for example with SHA1) is a better idea, especially from the point of view of memory and bandwidth.
Very short keys are often not a good idea. There is little point in writing "u1000flw" as a key if you can instead write "user:1000:followers". The latter is more readable and the added space is minor compared to the space used by the key object itself and the value object. While short keys will obviously consume a bit less memory, your job is to find the right balance.
Try to stick with a schema. For instance "object-type:id" is a good idea, as in "user:1000". Dots or dashes are often used for multi-word fields, as in "comment:1234:reply.to" or "comment:1234:reply-to".
The maximum allowed key size is 512 MB.
| Redis | 30,271,808 | 110 |
I would like to remove the debugging mode. I am using express, redis, socket.io and connect-redis, but I do not know where the debugging mode comes from.
Does anyone have an idea?
| Update
To completely remove debugging use:
var io = require('socket.io').listen(app, { log: false });
Where app is node.js http server / express etc.
You forgot to mention you are also using socket.io. This is coming from socket.io. You can disable this by configuration:
io.set('log level', 1); // reduce logging
| Redis | 6,807,775 | 110 |
How does Redis implement the expiration of keys? From here I learnt that Redis stores the time at which the key will expire, but how exactly is this implemented?
| In short - for each redis object, there is an expiration time. Unless you set the object to expire, that time is "never".
Now, the expiration mechanism itself is semi-lazy. Lazy expiration means that you don't actually expire the objects until they are read. When reading an object, we check its expiration timestamp, and if it's in the past, we return nothing, and delete the object while we're at it. But the problem is that if a key is never touched, it just takes up memory for no reason.
So Redis adds a second layer of random active expiration. It just reads random keys all the time, and when an expired key is touched it is deleted based on the lazy mechanism. This does not affect the expire behavior, it just adds "garbage collection" of expired keys.
Of course the actual implementation is more complicated than this, but this is the main idea.
You can read more about it here: http://redis.io/commands/expire
And the source code for the active expiration cycle can be found here: https://github.com/antirez/redis/blob/a92921da135e38eedd89138e15fe9fd1ffdd9b48/src/expire.c#L98
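A quick way to observe this behaviour from a client — a minimal sketch with redis-py; the key name is just an example:
import time
import redis

r = redis.StrictRedis()
r.set('session:42', 'data', ex=1)   # key expires after 1 second
print(r.ttl('session:42'))          # remaining time to live in seconds
time.sleep(1.5)
print(r.get('session:42'))          # None - expired lazily on this read, or already
                                    # removed by the background active-expiration cycle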
| Redis | 36,172,745 | 107 |