question (string, lengths 11–28.2k) | answer (string, lengths 26–27.7k) | tag (string, 130 classes) | question_id (int64, 935–78.4M) | score (int64, 10–5.49k)
---|---|---|---|---
I am doing research on migrating from MySQL to MariaDB and vice versa.
As it seems, the first part should be easy and uncomplicated, as stated here: MariaDB versus MySQL - Compatibility
It is the "vice versa" part that bothers me: MariaDB -> MySQL. I just could not find anything useful about this topic.
Can anyone help? Any advice or hints? Are there complications migrating from MariaDB back to MySQL?
Thank you so much.
| If there is someone who should ever take care of migrating from MariaDB to MySQL, it would be Oracle. As Oracle pretends MariaDB does not exist (company politics/policies), it also does not provide the possibility to migrate (which does not bother me personally, because being on the MariaDB team I'd prefer people to choose it and stay there :)
Having said all the above, the migration should be straightforward, provided you do not use special features (so, no storage engines besides MyISAM/InnoDB). You shut down MariaDB, save the data directory, remove MariaDB, install MySQL, and copy the saved data directory back. In the worst case you'd need to remove parameters from my.ini/my.cnf that are not recognized by MySQL.
The point I'm trying to make is that since the persistent data formats of the most commonly used storage engines are compatible, no data migration is required (at least not yet :)
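As a quick sanity check before trying this (my addition, not part of the original answer), you can ask information_schema which tables, if any, use a storage engine other than InnoDB/MyISAM:

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND ENGINE NOT IN ('InnoDB', 'MyISAM');

If this returns no rows, the copy-the-data-directory approach described above is much more likely to work.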
| MariaDB | 8,224,145 | 11 |
After doing an images rm, volumes rm and docker prune, I got this error starting MariaDB with docker-compose.
This is the db compose part (yml):
mariaDB:
image: 'mariadb:latest'
environment:
- MYSQL_ROOT_PASSWORD=root_password
The error log:
mariaDB_1 | 2022-05-27 20:14:42+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.8.3+maria~jammy started.
mariaDB_1 | 2022-05-27 20:14:42+00:00 [ERROR] [Entrypoint]: mariadbd failed while attempting to check config
mariaDB_1 | command was: mariadbd --verbose --help --log-bin-index=/tmp/tmp.JMRNT5ajM6
mariaDB_1 | Can't initialize timers
services_mariaDB_1 exited with code 1
Thanks in advance!
| Same problem here with the latest Docker container of MariaDB and the latest tag. Pinning to 10.8.2 (mariadb:10.8.2) in docker-compose fixes this issue. This is my new image line, and with 10.8.2 it keeps working:
db:
    image: mariadb:10.8.2
In the MariaDB issue tracker there is already a discussion going, so they are working on this.
| MariaDB | 72,410,663 | 10 |
I'm learning SQLAlchemy right now, but I've encountered an error that puzzles me. Yes, there are similar questions here on SO already, but none of them seem to be solved.
My goal is to use the ORM mode to query the database. So I create a model:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, registry
from sqlalchemy.sql import select
database_url = "mysql+pymysql://..."
mapper_registry = registry()
Base = mapper_registry.generate_base()
class User(Base):
__tablename__ = "user"
id = Column(Integer, primary_key=True)
name = Column(String(32))
engine = create_engine(database_url, echo=True)
mapper_registry.metadata.create_all(engine)
Now I want to load the whole row for all entries in the table:
with Session(engine) as session:
for row in session.execute(select(User)):
print(row.name)
Error:
Traceback (most recent call last):
...
print(row.name)
AttributeError: Could not locate column in row for column 'name'
What am I doing wrong here? Shouldn't I be able to access the fields of the ORM model? Or am I misunderstanding the idea of ORM?
I'm using Python 3.8 with PyMySQL 1.0.2 and SQLAlchemy 1.4.15 and the server runs MariaDB.
This example is as minimal as I could make it; I hope someone can point me in the right direction. Interestingly, inserting new rows works like a charm.
| session.execute(select(User)) will return a list of Row instances (tuples), which you need to unpack:
for row in session.execute(select(User)):
    # print(row[0].name)  # or
    print(row["User"].name)
But I would use .query, which returns instances of User directly:
for row in session.query(User):
    print(row.name)
| MariaDB | 67,623,570 | 10 |
Environment: CentOS 7 + MariaDB 5.5.64.
Let me show the information displayed on screen when running mysql_secure_installation.
# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
I wrote an automation expect script to install MariaDB.
vim secure.exp
set timeout 60
spawn mysql_secure_installation
expect {
"Enter current password for root (enter for none): " {send "\r";exp_continue}
"Set root password? [Y/n] " {send "y\r";exp_continue}
"New password:" {send "123456\r";exp_continue}
"Re-enter new password:" {send "123456\r";exp_continue}
"Remove anonymous users? [Y/n]" {send "y\r";exp_continue}
"Disallow root login remotely? [Y/n]" {send "y\r";exp_continue}
"Remove test database and access to it? [Y/n]" {send "y\r";exp_continue}
"Reload privilege tables now? [Y/n]" {send "y\r";exp_continue}
}
When I execute /usr/bin/expect secure.exp, I come across this error:
spawn mysql_secure_installation
invalid command name "Y/n"
while executing
"Y/n"
invoked from within
"expect {
"Enter current password for root (enter for none): " {send "\r";exp_continue}
"Set root password? [Y/n] " {send "y\r";exp..."
(file "secure.exp" line 3)
Writing it as below doesn't help either:
set timeout 60
spawn mysql_secure_installation
expect {
"Enter current password for root (enter for none): " {send "\r";exp_continue}
"Set root password? \\[Y/n] " {send "y\r";exp_continue}
"New password:" {send "123456\r";exp_continue}
"Re-enter new password:" {send "123456\r";exp_continue}
"Remove anonymous users? \\[Y/n]" {send "y\r";exp_continue}
"Disallow root login remotely? \\[Y/n]" {send "y\r";exp_continue}
"Remove test database and access to it? \\[Y/n]" {send "y\r";exp_continue}
"Reload privilege tables now? \\[Y/n]" {send "y\r";exp_continue}
}
Same error:
invalid command name "Y/n"
while executing
"Y/n"
invoked from within
"expect {
"Enter current password for root (enter for none): " {send "\r";exp_continue}
"Set root password? \\[Y/n] " {send "y\r";exp_conti..."
(file "secure.exp" line 3)
How to fix my exp script then?
| These scripts wait to receive optional output (timeout -1 means "no timeout"), and they can tell apart different responses, as required by yum install and mysql_secure_installation. With #!/bin/expect -f as the shebang, the scripts can be executed once they have been made executable with chmod +x.
A) To begin with, mariadb_yum.exp (requires su or sudo):
#!/bin/expect -f
set timeout 30
if {[llength $argv] == 0} {
send_user "Usage: mariadb_yum.exp \[linux sudo password\]\n"
exit 1
}
set USERNAME "[exec whoami]"
set PASSWORD [lindex $argv 0];
# optionally, redirect output to log file (silent install)
# log_user 0
# log_file -a "/home/$USERNAME/mariadb_install.log"
spawn sudo yum -y install MariaDB-server
set yum_spawn_id $spawn_id
# On GCE it will never ask for a sudo password:
expect -ex "\[sudo\] password for $USERNAME: " {
exp_send "$PASSWORD\r"
}
expect {
# when the package was already installed
-ex "Nothing to do" {
send_user "package was already installed\n"
}
# when the package had been installed
-ex "Complete!" {
send_user "package had been installed\n"
}
}
expect eof
close $yum_spawn_id
exit 0
B) And then mariadb_sec.exp (doesn't require sudo):
#!/bin/expect -f
set timeout 1
if {[llength $argv] == 0} {
send_user "Usage: mariadb_sec.exp \[mysql root password\]\n"
exit 1
}
set PASSWORD [lindex $argv 0];
spawn mysql_secure_installation
set mysql_spawn_id $spawn_id
# optionally, redirect output to log file (silent install)
# log_user 0
# log_file -a "/home/[exec whoami]/mariadb_install.log"
# when there is no password set, this probably should be "\r"
expect -ex "Enter current password for root (enter for none): "
exp_send "$PASSWORD\r"
expect {
# await an eventual error message
-ex "ERROR 1045" {
send_user "\nMariaDB > An invalid root password had been provided.\n"
close $mysql_spawn_id
exit 1
}
# when there is a root password set
-ex "Change the root password? \[Y/n\] " {
exp_send "n\r"
}
# when there is no root password set (could not test this branch).
-ex "Set root password? \[Y/n\] " {
exp_send "Y\r"
expect -ex "New password: "
exp_send "$PASSWORD\r"
expect -ex "Re-enter new password: "
exp_send "$PASSWORD\r"
}
}
expect -ex "Remove anonymous users? \[Y/n\] "
exp_send "Y\r"
expect -ex "Disallow root login remotely? \[Y/n\] "
exp_send "Y\r"
expect -ex "Remove test database and access to it? \[Y/n\] "
exp_send "Y\r"
expect -ex "Reload privilege tables now? \[Y/n\] "
exp_send "Y\r"
expect eof
close $mysql_spawn_id
exit 0
For debugging purposes - or to validate the answer - one can run expect with log level "strace 4". This is probably as reputable as a source can get when it comes to writing expect scripts, as it nicely displays what is going on and, most importantly, in which order things happen:
expect -c "strace 4" ./mariadb_yum.exp [linux sudo password]
expect -c "strace 4" ./mariadb_sec.exp [mysql root password]
The instruction set exp_internal 1 can be used to get debug output for the pattern matching.
A possible source of confusion might be where one spawns the processes - one can spawn several processes on various hosts, e.g. ssh locally and then yum and mysql_secure_installation remotely. I added $spawn_id to the scripts; the bottom close call might be redundant, since it is already at EOF (it is there just to show how to spawn & close processes):
Thanks for using MariaDB!
1 close $mysql_spawn_id
1 exit 0
2 rename _close.pre_expect close
Conclusion: The mariadb_sec.exp script could probably be improved further, e.g. by at first sending no password and seeing what happens - then sending the password on ERROR 1045 (when a password had already been set previously). It may be safe to assume that one has to set the password when the server has just been installed (except that yum reinstall delivers the same result). I just had no blank CentOS container to test all the cases. Unless running in a root shell, passing both kinds of passwords into one script would be required to automate this from installation until post-installation.
Probably worth noting is that on GCE, sudo would not ask for a password; there are indeed minor differences based upon the environment, as these CentOS container images behave differently. In such a case (since there is no su or container-image detection in place), the mariadb_yum.exp script might get stuck for 30 seconds and then continue.
The most reputable sources I can offer are the expect manual, written by Don Libes @ NIST, and the TCL/TK manual for expect, along with its SourceForge project, coincidentally called expect.
| MariaDB | 60,097,125 | 10 |
I've created a docker container containing an instance of MariaDB, but I cannot access the database from my physical machine:
I got the IP address from docker inspect and the port from docker ps, but Sequel Pro gave me the connection failed message (same thing with Visual Studio Code). Obviously, from inside the docker container I can connect to the database engine.
Where am I going wrong? Thanks so much to everyone! :)
[EDIT] Thanks to all comments...
If I try to expose the port, the container doesn't run :/
| This worked for me:
Create a new mariadb container
docker container run \
--name sql-maria \
-e MYSQL_ROOT_PASSWORD=12345 \
-e MYSQL_USER=username \
-e MYSQL_PASSWORD=12345 \
-e MYSQL_DATABASE=dbname \
-p 3306:3306 \
-d mariadb:10
Watch the logs and wait until the MariaDB server is up
docker container logs -f sql-maria
The tail of the log should look something like this
2020-02-04 20:02:44 0 [Note] mysqld: ready for connections.
Use a client of your choice to connect to MariaDB. I'm using the mysql client here:
mysql -h 127.0.0.1 -p -u username dbname
If you are on a Unix-based system, it is mandatory to use the loopback address 127.0.0.1 instead of localhost.
| MariaDB | 59,591,620 | 10 |
I'm using the latest versions of Spring Data JPA and the MariaDB driver, with MariaDB 10.3.16.
+--- org.springframework.boot:spring-boot-starter-data-jpa -> 2.1.5.RELEASE
...
| +--- org.springframework.boot:spring-boot-starter-jdbc:2.1.5.RELEASE
...
| +--- org.hibernate:hibernate-core:5.3.10.Final
This is my Entity:
@Entity
@Data
@Table
@NoArgsConstructor
@AllArgsConstructor
public class Note {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
private Integer id;
@Column
private String gsn;
@Column
@Enumerated(EnumType.STRING)
private NoteType type;
@Column
private String text;
@Column
private ZonedDateTime scheduleDt;
@Column
@CreationTimestamp
private Instant createDt;
@Column
@UpdateTimestamp
private ZonedDateTime updateDt;
}
When I persist my entity, Hibernate saves the ZonedDateTime member as a DATETIME column. But I want it to use a TIMESTAMP column instead of DATETIME.
This is the generated CREATE DDL, as I see it in the log:
create table `note` (`id` integer not null, `create_dt` datetime,
`gsn` varchar(255), `schedule_dt` datetime, `text` varchar(255),
`type` varchar(255), `update_dt` datetime, primary key (`id`))
engine=MyISAM
Here create_dt, schedule_dt, and update_dt are created with the DATETIME column type, which is not what I wanted (I don't like MyISAM either).
How can I fix it?
Edit (added because a comment cannot express DDL): when I use the columnDefinition attribute, the generated DDL is ...
create table `note` (`id` integer not null, `create_dt` datetime,
`gsn` varchar(255), `schedule_dt` datetime, `text` varchar(255),
`type` varchar(255), `update_dt` `TIMESTAMP`, primary key (`id`))
engine=MyISAM
There are unwanted backticks (`) around TIMESTAMP.
| JPA 2.2 offers support for mapping Java 8 Date/Time API, but only for the following types:
java.time.LocalDate
java.time.LocalTime
java.time.LocalDateTime
java.time.OffsetTime
java.time.OffsetDateTime
However, Hibernate supports also ZonedDateTime, like this.
When saving the ZonedDateTime, the following Timestamp is going to be sent to the PreparedStatement:
Timestamp.from( zonedDateTime.toInstant() )
And, when reading it from the database, the ResultSet will contain a Timestamp that will be transformed to a ZonedDateTime, like this:
ts.toInstant().atZone( ZoneId.systemDefault() )
Note that the ZoneId is not stored in the database, so basically, you are probably better off using a LocalDateTime if this Timestamp conversion is not suitable for you.
So, let's assume we have the following entity:
@Entity(name = "UserAccount")
@Table(name = "user_account")
public class UserAccount {
@Id
private Long id;
@Column(name = "first_name", length = 50)
private String firstName;
@Column(name = "last_name", length = 50)
private String lastName;
@Column(name = "subscribed_on")
private ZonedDateTime subscribedOn;
//Getters and setters omitted for brevity
}
Notice that the subscribedOn attribute is a ZonedDateTime Java object.
When persisting the UserAccount:
UserAccount user = new UserAccount()
.setId(1L)
.setFirstName("Vlad")
.setLastName("Mihalcea")
.setSubscribedOn(
LocalDateTime.of(
2020, 5, 1,
12, 30, 0
).atZone(ZoneId.systemDefault())
);
entityManager.persist(user);
Hibernate generates the proper SQL INSERT statement:
INSERT INTO user_account (
first_name,
last_name,
subscribed_on,
id
)
VALUES (
'Vlad',
'Mihalcea',
'2020-05-01 12:30:00.0',
1
)
When fetching the UserAccount entity, we can see that the ZonedDateTime is properly fetched from the database:
UserAccount userAccount = entityManager.find(
UserAccount.class, 1L
);
assertEquals(
LocalDateTime.of(
2020, 5, 1,
12, 30, 0
).atZone(ZoneId.systemDefault()),
userAccount.getSubscribedOn()
);
| MariaDB | 56,647,962 | 10 |
We are saving information in a json Column which contain json data in an array.
Data structure:
[
{
"type":"automated_backfill",
"title":"Walgreens Sales Ad",
"keyword":"Walgreens Sales Ad",
"score":4
},
{
"type":"automated_backfill",
"title":"Nicoderm Coupons",
"keyword":"Nicoderm Coupons",
"score":4
},
{
"type":"automated_backfill",
"title":"Iphone Sales",
"keyword":"Iphone Sales",
"score":3
},
{
"type":"automated_backfill",
"title":"Best Top Load Washers",
"keyword":"Best Top Load Washers",
"score":1
},
{
"type":"automated_backfill",
"title":"Top 10 Best Cell Phones",
"keyword":"Top 10 Best Cell Phones",
"score":1
},
{
"type":"automated_backfill",
"title":"Tv Deals",
"keyword":"Tv Deals",
"score":0
}
]
What we are trying:
SELECT id, ad_meta->'$**.type' FROM window_requests
that returns:
We are looking to get each array element as its own row, which I think is only possible with a stored procedure: return the whole column, then run a loop on each row and return the data...
Or can you think of a better solution?
Or should we update the architecture: change our database and save the information in a separate table instead of a JSON column?
Then we could easily join to get the data by adding a foreign key.
Thank you.
| I understand that you are trying to generate a table structure from the content of your JSON array.
You would need to proceed in two steps :
first, turn each element in the array into a record; for this, you can generate an inline table of numbers and use JSON_EXTRACT() to pull up the relevant JSON object.
then, extract the values of each attribute from each object, generating new columns; the -> operator can be used for this.
Query :
SELECT
id,
rec->'$.type' type,
rec->'$.score' score,
rec->'$.title' title,
rec->'$.keyword' keyword
FROM (
SELECT t.id, JSON_EXTRACT(t.val, CONCAT('$[', x.idx, ']')) AS rec
FROM
mytable t
INNER JOIN (
SELECT 0 AS idx UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
) AS x ON JSON_EXTRACT(t.val, CONCAT('$[', x.idx, ']')) IS NOT NULL
) z
This will handle up to 10 objects per JSON array (if you expect more than that, you can add expand the UNION ALL part of the query).
In this DB Fiddle with your test data, this yields :
| id | type | score | title | keyword |
| --- | -------------------- | ----- | ------------------------- | ------------------------- |
| 1 | "automated_backfill" | 4 | "Walgreens Sales Ad" | "Walgreens Sales Ad" |
| 1 | "automated_backfill" | 4 | "Nicoderm Coupons" | "Nicoderm Coupons" |
| 1 | "automated_backfill" | 3 | "Iphone Sales" | "Iphone Sales" |
| 1 | "automated_backfill" | 1 | "Best Top Load Washers" | "Best Top Load Washers" |
| 1 | "automated_backfill" | 1 | "Top 10 Best Cell Phones" | "Top 10 Best Cell Phones" |
| 1 | "automated_backfill" | 0 | "Tv Deals" | "Tv Deals" |
NB : the arrow operator is not available in MariaDB. You can use JSON_EXTRACT() instead, like :
SELECT
id,
JSON_EXTRACT(rec, '$.type') type,
JSON_EXTRACT(rec, '$.score') score,
JSON_EXTRACT(rec, '$.title') title,
JSON_EXTRACT(rec, '$.keyword') keyword
FROM
...
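As a side note (my addition, beyond the original answer): if you are on MySQL 8.0+ or MariaDB 10.6+, the JSON_TABLE() function unpivots the array without the manual index table. A sketch, reusing the answer's mytable/val names:

SELECT t.id, jt.type, jt.score, jt.title, jt.keyword
FROM mytable t,
     JSON_TABLE(t.val, '$[*]'
       COLUMNS (
         type    VARCHAR(64)  PATH '$.type',
         score   INT          PATH '$.score',
         title   VARCHAR(255) PATH '$.title',
         keyword VARCHAR(255) PATH '$.keyword'
       )
     ) AS jt;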
| MariaDB | 54,617,165 | 10 |
I have not used MySQL in a few years and when I created a new table it did something I was not expecting. I am using MariaDB v5.5.60-MariaDB
I need to create a table that has both a created column and an updated column.
I need the created column to only be set to CURRENT_TIMESTAMP when the row is created and then never change unless I change it explicitly.
I need the updated column to be set to CURRENT_TIMESTAMP both when the row is created and when the row is changed.
If I do the following:
CREATE TABLE user_prefs (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE,
user VARCHAR(255) NOT NULL,
provider VARCHAR(255) NOT NULL,
pref VARCHAR(128) NOT NULL,
jsondata LONGTEXT,
created timestamp NOT NULL,
modified timestamp NOT NULL,
PRIMARY KEY (id),
UNIQUE INDEX id_UNIQUE (id ASC));
Then the created column is set to:
DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
and the modified column is set to:
DEFAULT '0000-00-00 00:00:00'
If I try this:
CREATE TABLE user_prefs (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE,
user VARCHAR(255) NOT NULL,
provider VARCHAR(255) NOT NULL,
pref VARCHAR(128) NOT NULL,
jsondata LONGTEXT,
created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
modified timestamp NOT NULL ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (id),
UNIQUE INDEX id_UNIQUE (id ASC));
Then I get the error: Error Code: 1293. Incorrect table definition; there can be only one TIMESTAMP column with CURRENT_TIMESTAMP in DEFAULT or ON UPDATE clause
So is there a way to automate setting both created and modified on creation of a row, and to change modified every time the row is changed?
Thanks in advance.
| A table could have automatic initialization of the current date in only one TIMESTAMP column in old versions of MySQL. This behavior was fixed in version 5.6.5.
It means you have several ways to avoid this error:
1.You can upgrade your MySQL to the latest version;
Advantages:
native, clean implementation of modification-date management on the database side
there are no extra triggers
Drawback:
if the current version of MySQL is used in existing projects, then upgrading might cause problems.
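For reference, a sketch of what option 1 enables (my addition - this assumes MySQL 5.6.5+ or MariaDB 10.0+):

CREATE TABLE example_table (
    -- set once at insert time
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    -- set at insert time and refreshed on every update
    modified TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);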
2.You can create triggers for the creation and updating of a record, as @Simonare said.
Advantages:
modification-date management is implemented on the database side
Drawback:
there are many extra triggers. You'll create two triggers for each table, which means N*2 triggers for N tables.
3.You can set the default value of the created column to 0000-00-00 00:00:00 and give the updated column ON UPDATE CURRENT_TIMESTAMP, so the update date is generated automatically. Also, if you write null to the created column, MySQL will generate the current date automatically and set it in the column. For example:
CREATE TABLE example_table (
created TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
updated_at TIMESTAMP NOT NULL ON UPDATE CURRENT_TIMESTAMP
);
If you execute the following query:
INSERT INTO example_table (created) VALUES (null);
created column will have current date value. MySQL will fill it automatically.
Advantages:
there are no extra triggers
Drawback:
modification-date management is implemented partly on the database side and partly on the client application side.
4.You can use automatic initialization of the date in the updated column and use a trigger to fill the created column. For example:
CREATE TABLE example_table (
created TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
updated_at TIMESTAMP NOT NULL ON UPDATE CURRENT_TIMESTAMP
);
DELIMITER //
CREATE TRIGGER example_table_set_created_date
BEFORE INSERT
ON example_table FOR EACH ROW
BEGIN
SET NEW.created = CURRENT_TIMESTAMP();
END; //
DELIMITER ;
Advantages:
modification-date management is implemented on the database side
Drawback:
there are many extra triggers. You'll create N triggers for N tables.
| MariaDB | 54,350,419 | 10 |
I have installed WAMPServer 3.1.7 on my server and it is working fine. But due to an environment restriction I have to remove the database engines from it. I have both MySQL and MariaDB installed on the server.
How can I remove these databases from WAMPServer?
| You can deactivate both MySQL and/or mariaDB in WAMPServer if you want to. You simply use the wampmanager icon in the system tray.
Start WAMPServer, then
RIGHT CLICK the wampmanager -> Wamp Settings
You should see something like the below image.
NOTE In the image I have already deactivated mariaDB
Where you see Allow MySQL and Allow mariaDB they will be ticked with a green tick.
Click one of these and that database will be removed. Allow a few seconds for WAMPServer to complete the task and restart itself, then click the other database and allow that to complete and restart WAMPServer.
When you go back to this menu the green ticks should have been removed.
| MariaDB | 54,192,715 | 10 |
I'm trying to move from using VirtualBox as my development environment to Docker.
With VirtualBox, I mainly install PHP-FPM, Nginx and MariaDB, but in Docker I can't replicate the same stack despite trying for some days.
Out of all the LEMP/LAMP stack docker guides, only this one chentex/docker-nginx-centos works for me:
Here is the code from the Dockerfile
FROM centos:centos7
LABEL maintainer="Vicente Zepeda <[email protected]>"
ENV nginxversion="1.12.2-1" \
os="centos" \
osversion="7" \
elversion="7_4"
RUN yum install -y wget openssl sed &&\
yum -y autoremove &&\
yum clean all &&\
wget http://nginx.org/packages/$os/$osversion/x86_64/RPMS/nginx-$nginxversion.el$elversion.ngx.x86_64.rpm &&\
rpm -iv nginx-$nginxversion.el$elversion.ngx.x86_64.rpm &&\
sed -i '1i\
daemon off;\
' /etc/nginx/nginx.conf
CMD ["nginx"]
This works right out of the box, and I can see a default page on http://localhost
The only problem is that, it does not contain PHP-FPM and Mariadb.
I tried to alter the file and add PHP-FPM and MariaDB, but I found out on Reddit that each container should have one service, as in one container for nginx and another for PHP ... and I'm lost on how to do that.
| Docker containers are designed to have a single service running within them, rather than being an entire virtual system (as you might have with VirtualBox and virtual machines).
This means ideally you want a single container for each:
Nginx
PHP
Mariadb
Additionally the Centos docker image is designed as a base for others to inherit from, or to perform an OS specific task (for example cURL calls, or a shell) which is not really what you are needing.
I would recommend for your case, using docker-compose which will allow you to easily setup intermediate containers, and manage them all as one project.
I would recommend a docker-compose.yml file setup as such:
version: '3'
services:
web:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./src:/(nginx config root folder)
- ./config/site.conf:/etc/nginx/conf.d/site.conf
links:
- php
- mariadb
php:
image: php:7-fpm
mariadb:
image: mariadb
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
You would then have a /config/ folder in your project folder, which you'll need a site.conf file for the nginx settings.
You will also need a /src/ folder in your project folder, which would contain all the php/web code for your project.
The volume mounts in the docker-compose.yml file will load those into the container for you. Volume mounts work by mapping host folder path:container folder path; when something changes in one, it is updated in the other, almost as if copy/pasting. Keep in mind you may need to update file permissions.
For the MariaDB service you could add another volume to map the data files in the container to a host folder. Additionally, you can open the MySQL port so you can interrogate the database with a tool like MySQL Workbench, by adding a ports section for port 3306 as shown in the web section. The value of MYSQL_ROOT_PASSWORD will set the root user's password.
You can start this up with the command docker-compose up from your project directory.
When you need to manually restart nginx (or other services) you would stop and start the containers. You can do that with the commands:
docker-compose up - Starts containers
docker-compose down - Stops containers
If you wish to send the running container to the background (so it wont take up a terminal window) you would use: docker-compose up -d
Let me know if you have any questions or if something is unclear I'd be happy to update my answer!
| MariaDB | 52,540,785 | 10 |
I installed MariaDB 10.2.10 on CentOS 7 but it stopped running. If I do:
systemctl restart mariadb.service
I get:
mariadb.service: main process exited, code=killed, status=6/ABRT
Failed to start MariaDB 10.2.12 database server.
Unit mariadb.service entered failed state.
What's weird is that it was running fine for more than a year and now suddenly, it won't start. It's a production server, so I have several databases there that I need; I can't just reinstall it from scratch.
What can I do to get MariaDB back up and running without losing my databases?
/var/log/messages
systemd: mariadb.service: main process exited, code=killed, status=6/ABRT
systemd: Failed to start MariaDB 10.2.12 database server.
systemd: Unit mariadb.service entered failed state.
systemd: mariadb.service failed.
systemd: mariadb.service holdoff time over, scheduling restart.
systemd: Starting MariaDB 10.2.12 database server...
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] /usr/sbin/mysqld (mysqld 10.2.12-MariaDB) starting as process 24976 ...
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Uses event mutexes
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Compressed tables use zlib 1.2.7
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Using Linux native AIO
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Number of pools: 1
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Using SSE2 crc32 instructions
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Completed initialization of buffer pool
mysqld: 2018-04-24 7:06:38 140100781233920 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Log sequence number at the start 1597389266 and the end 1597386521 do not match
mysqld: 2018-04-24 7:06:38 140101542774912 [ERROR] InnoDB: Database page corruption on disk or a failed file read of tablespace innodb_system page [page id: space=0, page number=5]. You may have to recover from a backup.
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: Uncompressed page, stored checksum in field1 2317626450, calculated checksums for field1: crc32 2317626450/1916606668, innodb 3858799490, page type 7 == TRX_SYS.none 3735928559, stored checksum in field2 960005681, calculated checksums for field2: crc32 2317626450/1916606668, innodb 756644076, none 3735928559, page LSN 0 1597389266, low 4 bytes of LSN at page end 1597386521, page number (if stored to page already) 5, space id (if created with >= MySQL-4.1.1 and stored already) 0
mysqld: InnoDB: Page may be a transaction system page
mysqld: 2018-04-24 7:06:38 140101542774912 [Note] InnoDB: It is also possible that your operating system has corrupted its own file cache and rebooting your computer removes the error. If the corrupt page is an index page. You can also try to fix the corruption by dumping, dropping, and reimporting the corrupt table. You can use CHECK TABLE to scan your table for corruption. Please refer to http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html for information about forcing recovery.
mysqld: 2018-04-24 7:06:38 140101542774912 [ERROR] [FATAL] InnoDB: Aborting because of a corrupt database page.
mysqld: 180424 7:06:38 [ERROR] mysqld got signal 6 ;
mysqld: This could be because you hit a bug. It is also possible that this binary
mysqld: or one of the libraries it was linked against is corrupt, improperly built,
mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
mysqld: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
mysqld: We will try our best to scrape up some info that will hopefully help
mysqld: diagnose the problem, but since we have already crashed,
mysqld: something is definitely wrong and this may fail.
mysqld: Server version: 10.2.12-MariaDB
mysqld: key_buffer_size=134217728
mysqld: read_buffer_size=131072
mysqld: max_used_connections=0
mysqld: max_threads=153
mysqld: thread_count=0
mysqld: It is possible that mysqld could use up to
mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467244 K bytes of memory
mysqld: Hope that's ok; if not, decrease some variables in the equation.
mysqld: Thread pointer: 0x0
mysqld: Attempting backtrace. You can use the following information to find out
mysqld: where mysqld died. If you see no messages after this, something went
mysqld: terribly wrong...
mysqld: stack_bottom = 0x0 thread_stack 0x49000
mysqld: /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x557c611fdc4e]
mysqld: /usr/sbin/mysqld(handle_fatal_signal+0x355)[0x557c60c88825]
mysqld: /lib64/libpthread.so.0(+0xf5e0)[0x7f6bee7195e0]
mysqld: /lib64/libc.so.6(gsignal+0x37)[0x7f6becc261f7]
mysqld: /lib64/libc.so.6(abort+0x148)[0x7f6becc278e8]
mysqld: /usr/sbin/mysqld(+0x9a5643)[0x557c60fc2643]
mysqld: /usr/sbin/mysqld(+0x9e7289)[0x557c61004289]
mysqld: /usr/sbin/mysqld(+0xa0adfa)[0x557c61027dfa]
mysqld: /usr/sbin/mysqld(+0x9e8986)[0x557c61005986]
mysqld: /usr/sbin/mysqld(+0x9888af)[0x557c60fa58af]
mysqld: /usr/sbin/mysqld(+0x9575ab)[0x557c60f745ab]
mysqld: mysys/stacktrace.c:268(my_print_stacktrace)[0x557c60f76d87]
mysqld: buf/buf0buf.cc:6026(buf_page_io_complete(buf_page_t*, bool))[0x557c60e62cf9]
mysqld: buf/buf0rea.cc:227(buf_read_page(page_id_t const&, page_size_t const&))[0x557c60c8ace4]
mysqld: srv/srv0start.cc:896(srv_undo_tablespaces_init(bool))[0x557c60b0b3c0]
mysqld: /usr/sbin/mysqld(_Z11plugin_initPiPPci+0x9a2)[0x557c60b0cc72]
mysqld: handler/ha_innodb.cc:4381(innobase_init(void*))[0x557c60a66be8]
mysqld: sql/handler.cc:520(ha_initialize_handlerton(st_plugin_int*))[0x557c60a6b2d1]
mysqld: sql/sql_plugin.cc:1411(plugin_initialize(st_mem_root*, st_plugin_int*, int*, char**, bool))[0x7f6becc12c05]
mysqld: sql/mysqld.cc:5258(init_server_components())[0x557c60a5f51d]
mysqld: The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
mysqld: information that should help you find out what is causing the crash.
systemd: mariadb.service: main process exited, code=killed, status=6/ABRT
systemd: Failed to start MariaDB 10.2.12 database server.
systemd: Unit mariadb.service entered failed state.
systemd: mariadb.service failed.
| You can try to force InnoDB recovery through MySQL, which should solve the problem if you have no hardware problems on the disk.
[Note] InnoDB: It is also possible that your operating system has
corrupted its own file cache and rebooting your computer removes the
error. If the corrupt page is an index page. You can also try to fix
the corruption by dumping, dropping, and reimporting the corrupt
table. You can use CHECK TABLE to scan your table for corruption.
Please refer to http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html for information about forcing recovery.
See also https://www.percona.com/blog/2016/01/19/dealing-with-corrupted-innodb-data/
| MariaDB | 50,003,447 | 10 |
I am trying to deploy a MariaDB image on OpenShift Origin. I am using mariadb:10.2.12 in my Dockerfile. It works OK locally, but I get the following error when I try to deploy on OpenShift Origin.
Initializing database
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
Cannot change ownership of the database directories to the 'mysql'
user. Check that you have the necessary permissions and try again.
The chown command comes from mariadb:10.2.12 Docker file.
Initially I had the issue of the root user, which is not allowed on OpenShift Origin, so now I am using
USER mysql
in the Dockerfile. Now I don't get the warning about running as root, but OpenShift Origin still doesn't like chown. Remember I am not the admin of the Origin instance, only a user. My Dockerfile is as follows:
FROM mariadb:10.2.12
ENV MYSQL_DATABASE="db_profile"
COPY ./my.cnf /etc/mysql/my.cnf
COPY ./db_profile.sql /docker-entrypoint-initdb.d/
USER mysql
EXPOSE 3306
and on local I run it as follows:
docker build . -t laeeq/ligandprofiledb:0.0.1
docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=mypassword -d laeeq/ligandprofiledb:0.0.1
Is there a workaround to solve this chown problem?
| The MariaDB images on Docker Hub don't follow the good practice of not requiring to run as the root user.
You should instead use the MariaDB images provided by OpenShift. Eg:
centos/mariadb-102-centos7
See:
https://github.com/sclorg/mariadb-container
There should be an ability to select MariaDB from the service catalog browser in the OpenShift web console, or use the mariadb template from the command line.
| MariaDB | 48,306,277 | 10 |
I have XAMPP installed on Windows, with MySQL set up.
I was wondering how I could query my database from C#.
I can already connect using MySql.Data.MySqlClient.MySqlConnection.
I am looking for a string in the database and, if it is there, want to pop up a message box saying Found!. How would I do this?
| Here is sample code to make your application connect to the database. These usings are required at the top of the file:
using System.Data;
using MySql.Data.MySqlClient;
string m_strMySQLConnectionString;
m_strMySQLConnectionString = "server=localhost;userid=root;database=dbname";
Function to get a string value from the DB:
private string GetValueFromDBUsing(string strQuery)
{
    string strData = "";
    try
    {
        if (string.IsNullOrEmpty(strQuery))
            return string.Empty;
        using (var mysqlconnection = new MySqlConnection(m_strMySQLConnectionString))
        {
            mysqlconnection.Open();
            using (MySqlCommand cmd = mysqlconnection.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandTimeout = 300;
                cmd.CommandText = strQuery;
                // ExecuteScalar returns the first column of the first row, or null if there are no rows
                object objValue = cmd.ExecuteScalar();
                if (objValue == null)
                    return string.Empty;
                // Reuse the value we already fetched; calling ExecuteScalar() again would re-run the query
                strData = (string)objValue;
            }
            // The using blocks dispose the command and connection for us
            mysqlconnection.Close();
            if (strData == null)
                return string.Empty;
            else
                return strData;
        }
    }
    catch (MySqlException ex)
    {
        LogException(ex);
        return string.Empty;
    }
    catch (Exception ex)
    {
        LogException(ex);
        return string.Empty;
    }
}
Your calling code in a button click event (MessageBox requires System.Windows.Forms):
try
{
    string strQueryGetValue = "select columnname from tablename where id = '1'";
    string strValue = GetValueFromDBUsing(strQueryGetValue);
    if (strValue.Length > 0)
    {
        MessageBox.Show("Found");
        MessageBox.Show(strValue);
    }
    else
        MessageBox.Show("Not Found");
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message.ToString());
}
| MariaDB | 44,719,432 | 10 |
I have a table that contains a list of records. Each iteration, a random set of these must be picked, starting from a specific offset. Each row has a chance of being picked (so that e.g. new or rarely picked rows are picked more often).
However, something doesn't work, causing rows to be returned that do not satisfy a condition using an aliased rand().
I'm attempting to use the following query:
select
id,
probability,
rand() rolledChance
from records
where id > :offset
having rolledChance < probability;
Where :offset is a prepared statement parameter, and is the last scanned id in the last iteration for this user.
On a table created like this (which is the relevant subset of the table):
CREATE TABLE records (id INT, probability FLOAT);
where probability is a value between 0 and 1 in the records table. However, this returns rows where the condition is not satisfied. I checked this with the following query:
select
*,
x.rolledChance < x.probability shouldPick
from
(select
id,
probability,
rand() rolledChance
from records
having rolledChance < probability
) x;
A few rows returned are:
id probability rolledChance shouldPick
12 0.546358 0.015139976530466207 1
26 0.877424 0.9730734508233829 0
46 0.954425 0.35213605347288407 1
When I repurpose the second query as follows, it works as expected, and only returns rows where rolledChance is actually lower than probability:
select
*,
x.rolledChance < x.probability shouldPick
from
(select id, probability, rand() rolledChance from records) x
where rolledChance < probability;
So what am I missing? Are the probability and rolledChance used differently than I thought in the comparison? Is the rand() evaluated every time the alias is used in the same query?
Version output: mysql Ver 15.1 Distrib 10.0.28-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2, running on Debian Jessie.
| I think the problem is that HAVING is applied after GROUP BY, but still before the SELECT phase. I realise it's confusing because the HAVING clause references a column from the SELECT statement, but I think it basically just executes whatever is in the SELECT statement twice - once for the having, and then again for the SELECT.
Eg, see this answer.
Note, it's especially confusing because if you refer to a column name that doesn't appear in the SELECT statement in a HAVING clause it'll throw an error.
Eg, this fiddle
But as per that fiddle above, it'll still let you actually filter based on the result of a function that doesn't appear in the output. Long story short, the HAVING clause is still doing what you want, but you can't both filter on a random value and display it at the same time using that approach. If you need to do that, you need to use a subquery to fix the value first, then the outer query can filter and display on it.
Also, to make it clear, it's probably worth just using RAND() directly in the HAVING clause, not in the SELECT part. Though I get that this question is asking why this happens rather than trying to solve the problem specifically.
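For completeness, a minimal sketch of that suggestion (my addition - it assumes you don't need to display the rolled value):

SELECT id, probability
FROM records
WHERE id > :offset
HAVING RAND() < probability;

Here RAND() is evaluated during the HAVING phase, so no alias is needed at all.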
| MariaDB | 44,496,024 | 10 |
I have installed MariaDB on Ubuntu 16.04 LTS. Then I ran
/usr/bin/mysql_secure_installation
and set a root password. Accessing the DB via mysql -u root -p works fine. But checking the status with service mysql status opens a log file with this warning:
[Warning] 'user' entry 'root@localhost' has both a password and an authentication plugin specified. The password will be ignored.
The questions are:
Is this a worry or completely normal?
If this is a worry, how can I fix it?
| It is normal, if by saying "accessing the DB via mysql -u root -p works fine" you mean that you are running it while being a system root (or under sudo). You should not be able to do it as an ordinary user.
Packages generated by Ubuntu by default have unix_socket authentication for the local root. To check, run
SELECT user, host, plugin FROM mysql.user;
You should see unix_socket in the plugin column for root@localhost.
If you want to use the password authentication instead, run
UPDATE mysql.user SET plugin = '' WHERE plugin = 'unix_socket';
FLUSH PRIVILEGES;
| MariaDB | 43,439,111 | 10 |
Not quite sure how to ask this question, so if someone wants to edit it to articulate it better, please do. I want to join on a user table, but the row has two FKs pointing to the user table:
item_tbl
id | ownerId | lastModifiedById | itemName
------------------------------------------
1 | 1 | 2 | "Blog Post"
user_tbl
id | username
-------------
1 | John
2 | Sally
Desired output (or something like it)
Owner Username | last modified by | item
----------------------------------------------
John | Sally | "Blog Post"
Currently I'm doing two queries to get this information. Is there a better (read: more efficient) way?
| SELECT user_tbl.username Owner, a.username Modifier, item_tbl.itemName
FROM item_tbl
JOIN user_tbl
ON item_tbl.ownerId = user_tbl.id
JOIN user_tbl a
ON item_tbl.lastModifiedById = a.id;
This worked, for those curious, as hinted at by Drew in the comments.
| MariaDB | 32,622,252 | 10 |
I already use MariaDB as my MySQL server. However, I'm not sure whether using a Node package designed for MySQL is a good idea for MariaDB.
There are actually two node packages:
For mscdex/node-mariasql
For mysqljs/mysql
I currently use mysql, since it seems pretty mature and maintained.
It also seems to work well with mariadb on my side.
Are there any restrictions/incompatibilities/security issues in using mysqljs/mysql rather than mscdex/node-mariasql with a MariaDB server?
Thanks
| No, it doesn't matter which you use. MariaDB is backwards compatible with MySQL. You could even connect to MySQL with node-mariasql if you wanted.
| MariaDB | 23,226,334 | 10 |
A few days ago I read about the wide-column store type of NoSQL databases, and specifically Apache Cassandra.
What I understand is that Cassandra consists of:
a keyspace (like a database in relational databases), supporting many column families or tables (the same as a table in relational databases), each with unlimited rows.
From the Stack Overflow tag description:
A wide column store is a type of key-value database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table.
In Cassandra, all of the rows (in a table) should have a row key, and each row key can have multiple columns.
I have read about the differences in implementation and data storage between relational databases and NoSQL (Cassandra), but I don't understand the structural difference:
Imagine a scenario in which I have a table (or column family in Cassandra).
When I execute a query (CQL) like this:
select * from users;
It gives me the result you can see here:
lastname | age | city | email
----------+------+---------------+----------------------
Doe | 36 | Beverly Hills | [email protected]
Jones | 35 | Austin | [email protected]
Byrne | 24 | San Diego | [email protected]
Smith | 46 | Sacramento | null
Jones2 | null | Austin | [email protected]
Then I reproduce the above scenario in a relational database (MS SQL) with the following query:
select * from [users]
And the result is:
lastname | age | city | email
----------+------+---------------+----------------------
Doe | 36 | Beverly Hills | [email protected]
Jones | 35 | Austin | [email protected]
Byrne | 24 | San Diego | [email protected]
Smith | 46 | Sacramento | NULL
Jones2 | NULL | Austin | [email protected]
I know that Cassandra supports dynamic columns, and I can achieve this with something like:
ALTER TABLE users ADD website varchar;
But this is available in the relational model too; for example, in MS SQL the above can be implemented with something like:
ALTER TABLE users ADD website varchar(MAX);
What I see is that the results of the first and second SELECT are the same.
In Cassandra, the row key (lastname) is treated as a standalone object, but it is the same as a unique field (like an ID or a text column) in MS SQL (and all relational databases), and I see that the column types in Cassandra are static (in my example, varchar), unlike what the Stack Overflow tag describes.
So my questions are:
Is there any misunderstanding in my mental model of Cassandra?
What is different between the two structures? As shown above, the results are the same.
Are there any special scenarios (JSON-like) that cannot be implemented in relational databases but that Cassandra supports? (For example, I know that nested columns are not supported in Cassandra.)
Thank you for reading.
| We have to look at a more complex example to see the differences :)
For a start:
the term column family was used in the older Thrift API
in the newer CQL API, the term table is used
A table is defined as a "two-dimensional view of a multi-dimensional column family".
The term "wide rows" related mainly to the Thrift API. In CQL it is defined a bit differently, but underneath it looks the same.
Comparing SQL and CQL: in SQL, a table is a set of rows. In a simple example it looks like it is the same in CQL, but it is not. A CQL table is a set of partitions, where each partition can be just a single row (e.g. when you don't have a clustering key) or multiple rows. A partition containing multiple rows is, in Thrift terminology, called a "wide row". To see how it is stored underneath, please read e.g. the part about composite keys from here.
There are more differences:
CQL can have static columns, which are stored at the partition level - it seems that every row in the partition has a common value, but really it is a single value stored at an upper level. This can also be used to model 1:N relations (see the sketch below).
In CQL you can have collection-type columns - set, list, map.
A column can contain a user-defined type (you can define e.g. address as a type and reuse it in many places), or a collection can be a collection of user-defined types.
But CQL also does not support JOINs, which are available in SQL, and you have to structure your tables very carefully, since they have to be strictly query-oriented (in Cassandra you can't query data by just any column value, and secondary indexes also have many limitations). It is usually said that in the relational model you model tables based on your data, whereas in Cassandra you model based on your queries.
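A minimal CQL sketch of the static-column, collection, and user-defined-type features mentioned above (my addition - the table and type names are made up for illustration):

CREATE TYPE address (street text, city text);

CREATE TABLE users_by_city (
    city text,               -- partition key
    user_id uuid,            -- clustering key: many rows per partition ("wide row")
    mayor text STATIC,       -- stored once per partition, shared by all its rows
    emails set<text>,        -- collection column
    home frozen<address>,    -- user-defined type
    PRIMARY KEY (city, user_id)
);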
I hope I was able to make it a bit clearer for you. I recommend watching some videos (or reading the slides) from the Datastax Core Concepts Course as a solid introduction to Cassandra.
| Cassandra | 36,210,321 | 24 |
How does one create the first user in a cassandra database?
I tried:
CREATE USER username WITH PASSWORD "";
and it says:
Bad Request: Only superusers are allowed to perform CREATE USER queries
But I have never created a user before this attempt, so how do you create the first user in a cassandra database?
This seems a little strange because it's like a chicken and egg problem, but people use Cassandra so I am sure there must be a solution somewhere.
| Once you have enabled Authentication and Authorization, you can log-in (to your local Cassandra instance) as the default Cassandra admin user like this:
./cqlsh localhost -u cassandra -p cassandra
If you are running Cassandra on a Windows Server, I believe you need to invoke it with Python:
python cqlsh localhost -u cassandra -p cassandra
Once you get in, your first task should be to create another super user account.
CREATE USER dba WITH PASSWORD 'bacon' SUPERUSER;
Next, it is a really good idea to set the current Cassandra super user's password to something else...preferably something long and incomprehensible. With your new super user, you shouldn't need the default Cassandra account again.
ALTER USER cassandra WITH PASSWORD 'dfsso67347mething54747long67a7ndincom4574prehensi562ble';
For more information, check out this DataStax article: A Quick Tour of Internal Authentication and Authorization Security in DataStax Enterprise and Apache Cassandra
| Cassandra | 22,213,786 | 24 |
I would like to create keyspaces and column-families at the start of my Cassandra container.
I tried the following in a docker-compose.yml file:
# shortened for clarity
cassandra:
hostname: my-cassandra
image: my/cassandra:latest
command: "cqlsh -f init-database.cql"
The image my/cassandra:latest contains init-database.cql in /. But this does not seem to work.
Is there a way to make this happen ?
| I was also searching for the solution to this, and here is how I accomplished it.
Here, the second Cassandra instance has a volume containing schema.cql and runs the cqlsh command against the first:
My version, with a healthcheck so we can get rid of the sleep command:
version: '2.2'
services:
cassandra:
image: cassandra:3.11.2
container_name: cassandra
ports:
- "9042:9042"
environment:
- "MAX_HEAP_SIZE=256M"
- "HEAP_NEWSIZE=128M"
restart: always
volumes:
- ./out/cassandra_data:/var/lib/cassandra
healthcheck:
test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra" ,"-e describe keyspaces"]
interval: 15s
timeout: 10s
retries: 10
cassandra-load-keyspace:
container_name: cassandra-load-keyspace
image: cassandra:3.11.2
depends_on:
cassandra:
condition: service_healthy
volumes:
- ./src/main/resources/cassandra_schema.cql:/schema.cql
command: /bin/bash -c "echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
Netflix version, using sleep:
version: '3.5'
services:
cassandra:
image: cassandra:latest
container_name: cassandra
ports:
- "9042:9042"
environment:
- "MAX_HEAP_SIZE=256M"
- "HEAP_NEWSIZE=128M"
restart: always
volumes:
- ./out/cassandra_data:/var/lib/cassandra
cassandra-load-keyspace:
container_name: cassandra-load-keyspace
image: cassandra:latest
depends_on:
- cassandra
volumes:
- ./src/main/resources/cassandra_schema.cql:/schema.cql
command: /bin/bash -c "sleep 60 && echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
P.S. I found this approach in one of the Netflix repos.
| Cassandra | 40,443,617 | 23 |
I'm having a weird error trying to read data from a Cassandra table. I have a single-node installation with the default setup. This is the query I'm making:
SELECT component_id,
reading_1,
reading_2,
reading_3,
date
FROM component_readings
WHERE park_id=2
AND component_id IN (479)
AND date >= '2016-04-09+0000'
AND date <= '2016-05-08+0000';
component_readings is a simple table, with no clustering conditions:
CREATE TABLE component_readings (
park_id int,
component_id int,
date timestamp,
reading_1 decimal,
reading_2 decimal,
...
PRIMARY KEY ((park_id), component_id, date)
);
With some component_id values it works, and with other values it fails. This is the error I'm getting:
cassandra.ReadFailure: code=1300 [Replica(s) failed to execute read]
message="Operation failed - received 0 responses and 1 failures"
info={'required_responses': 1, 'received_responses': 0, 'failures': 1,
'consistency': 'LOCAL_ONE'}
And the cassandra's system.log shows this error:
ERROR [SharedPool-Worker-1] 2016-05-09 15:33:58,872 StorageProxy.java:1818 -
Scanned over 100001 tombstones during query 'SELECT * FROM xrem.component_readings
WHERE park_id, component_id = 2, 479 AND date >= 2016-04-09 02:00+0200 AND date <=
2016-05-08 02:00+0200 LIMIT 5000' (last scanned row partion key was ((2, 479),
2016-05-04 17:30+0200)); query aborted
The weird thing is that I get the error only when making the query from an external program (via the python cassandra-connector). If I make it directly in the cqlsh shell, it works perfectly.
My installation was cassandra 2.2, but I've upgraded to 3.5, and I get the same error.
| You are exceeding the tombstone_failure_threshold, which defaults to 100,000. You can either
increase the value in cassandra.yaml, or
clean up your tombstones
To do the latter alter your table and set the gc_grace_seconds to 0:
ALTER TABLE component_readings WITH GC_GRACE_SECONDS = 0;
Then trigger a compaction via the nodetool. This will flush out all tombstones.
In your particular scenario of a one-node-cluster you could leave the GC_GRACE_SECONDS at zero. But if you do, keep in mind to undo this if you ever want to use more than one node!
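If you ever grow beyond one node, you can restore the default afterwards (my addition - 864000 seconds, i.e. ten days, is the out-of-the-box default):

ALTER TABLE component_readings WITH gc_grace_seconds = 864000;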
| Cassandra | 37,114,455 | 23 |
I cant find it in cassandra.yaml, maybe nodetool can get me the configured replication factor of my cluster?
What is the default value of the replication factor?
| A cluster doesn't have a replication factor, however your keyspaces does.
If you want to look at the replication factor of a given keyspace, simply execute SELECT * FROM system_schema.keyspaces; and it will print all replication information you need.
| Cassandra | 34,859,635 | 23 |
How can I write a query to find all records in a table that have a null/empty field? I tried tried the query below, but it doesn't return anything.
SELECT * FROM book WHERE author = 'null';
| null fields don't exist in Cassandra unless you add them yourself.
You might be thinking of the CQL data model, which hides certain implementation details in order to have a more understandable data model. Cassandra is sparse, which means that only data that is used is actually stored. You can visualize this by adding in some test data to Cassandra through CQL.
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1 } ;
cqlsh> use test ;
cqlsh:test> CREATE TABLE foo (name text, age int, pet text, primary key (name)) ;
cqlsh:test> insert into foo (name, age, pet) values ('yves', 81, 'german shepherd') ;
cqlsh:test> insert into foo (name, pet) values ('coco', 'ferret') ;
cqlsh:test> SELECT * FROM foo ;
name | age | pet
-----+-----+------------------
coco | null | ferret
yves | 81 | german shepherd
So even though it appears that there is a null value, the actual value is nonexistent -- CQL is showing you a null because this makes more sense, intuitively.
If you take a look at the table from the Thrift side, you can see that the table contains no such value for coco's age.
$ bin/cassandra-cli
[default@unknown] use test;
[default@test] list foo;
RowKey: coco
=> (name=, value=, timestamp=1389137986090000)
=> (name=pet, value=666572726574, timestamp=1389137986090000)
-------------------
RowKey: yves
=> (name=, value=, timestamp=1389137973402000)
=> (name=age, value=00000051, timestamp=1389137973402000)
=> (name=pet, value=6765726d616e207368657068657264, timestamp=1389137973402000)
Here, you can clearly see that yves has two columns: age and pet, while coco only has one: pet.
| Cassandra | 20,981,075 | 23 |
I want to experiment with using Cassandra as an event store in an event sourcing application. My requirements for an event store are quite simple. The event 'schema' would be something like this:
id: the id of an aggregate root entity
data: the serialized event data (e.g. JSON)
timestamp: when the event occurred
sequence_number: the unique version of the event
I am completely new to Cassandra so forgive me for my ignorance in what I'm about to write. I only have two queries that I'd ever want to run on this data.
Give me all events for a given aggregate root id
Give me all events for a given aggregate root id where sequence number is > x
My idea is to create a Cassandra table in CQL like this:
CREATE TABLE events (
id uuid,
seq_num int,
data text,
timestamp timestamp,
PRIMARY KEY (id, seq_num) );
Does this seem like a sensible way to model the problem? And, importantly, does using a compound primary key allow me to efficiently perform the queries I specified? Remember that, given the use case, there could be a large number of events (with a different seq_num) for the same aggregate root id.
My specific concern is that the second query is going to be inefficient in some way (I'm thinking about secondary indexes here...)
| Your design seems to be well modeled in "Cassandra terms". The queries you need are indeed supported in "composite key" tables; you would have something like:
query 1: select * from events where id = 'id_event';
query 2: select * from events where id = 'id_event' and seq_num > NUMBER;
I do not think the second query is going to be inefficient; however, it may return a lot of elements. If that is a concern, you can cap the number of events returned with the limit keyword.
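For example (the uuid literal here is just a placeholder):
select * from events where id = 62c36092-82a1-3a00-93d1-46196ee77204 and seq_num > 10 limit 100;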
Using composite keys seems like a good match for your specific requirements. Using "secondary indexes" does not seem to bring much to the table... unless I miss something in your design/requirements.
HTH.
| Cassandra | 19,321,682 | 23 |
What is the difference between:
a) nodetool rebuild
b) nodetool repair [-pr]
In other words, what exactly do the respective commands do?
| nodetool rebuild: is similar to the bootstrapping process (when you add a new node to the cluster) but for a datacenter. The process here is mainly a streaming from the already live nodes to the new nodes (the new ones are empty). So after defining the key ranges for the nodes which is very fast, the rest can be seen as a copy operation.
nodetool repair -pr: is not a copy operation. The node being repaired is not empty; it already contains data, but if the replication factor is greater than 1 that data needs to be compared to the data on the rest of the replicas, and if there is a difference it will be corrected. The process involves a lot of streaming, but it is not data streaming: the node being repaired requests a merkle tree (basically a tree of hashes) in order to verify whether the information both nodes have is the same or not; if not, it requests a full stream of the section of the data that has any difference (so all the replicas end up with the same data). Streaming these hashes is faster than streaming the whole data before verification; this works under the assumption that most data will be the same on both nodes except for some differences here and there. This process also removes tombstones created when deleting from the database, establishing a new "checkpoint" after which new tombstones will be created upon deletion of data, but the old ones will not be used anymore.
Hope it helps!
| Cassandra | 17,602,125 | 23 |
Is there an easy way to check if table (column family) is defined in Cassandra using CQL (or API perhaps, using com.datastax.driver)?
Right now I am leaning towards executing SELECT 1 FROM table and checking for exception but maybe there is a better way?
| As of 1.1 you should be able to query the system keyspace, schema_columnfamilies column family. If you know which keyspace you want to check, this CQL should list all column families in a keyspace:
SELECT columnfamily_name
FROM schema_columnfamilies WHERE keyspace_name='myKeyspaceName';
The ticket describing this functionality is here: https://issues.apache.org/jira/browse/CASSANDRA-2477
Although, they do note that some of the system column names have changed between 1.1 and 1.2. So you might have to mess around with it a little to get your desired results.
Edit 20160523 - Cassandra 3.x Update:
Note that for Cassandra 3.0 and up, you'll need to make a few adjustments to the above query:
SELECT table_name
FROM system_schema.tables WHERE keyspace_name='myKeyspaceName';
| Cassandra | 16,016,946 | 23 |
I have Ubuntu 12.04 with cassandra 1.1.3 (tarball installation), When I try to start cassandra, I get the following:
user@ubuntu:~/apache-cassandra-1.1.3/bin$ sudo ./cassandra -f
xss = -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms4G -Xmx4G -Xmn800M -XX: +HeapDumpOnOutOfMemoryError -Xss128k
user@ubuntu:~/apache-cassandra-1.1.3/bin$
According to cassandra documentation, the output does not look as expected:
The service should start in the foreground and log gratuitously to
standard-out. Assuming you don't see messages with scary words like
"error", or "fatal", or anything that looks like a Java stack trace,
then chances are you've succeeded.
So, what is the problem?
The problem may be caused by using OpenJDK, as described in a Cassandra bug report; but see the comments here for occurrences of this issue on Sun/Oracle and other JVMs:
https://issues.apache.org/jira/browse/CASSANDRA-2441
If you cannot install the Oracle JVM, then try changing the stack size in the conf/cassandra-env.sh configuration script. Look for the following section, at around line 185, and change the -Xss180k to a higher value.
if [ "`uname`" = "Linux" ] ; then
# reduce the per-thread stack size to minimize the impact of Thrift
# thread-per-client. (Best practice is for client connections to
# be pooled anyway.) Only do so on Linux where it is known to be
# supported.
# u34 and greater need 180k
JVM_OPTS="$JVM_OPTS -Xss180k"
fi
echo "xss = $JVM_OPTS"
I have used 280k successfully when testing installations on Ubuntu servers at Rackspace and Amazon.
Based on reports in the comments below, I would suggest increasing the stack size in 20k increments, starting with -Xss200k, until Cassandra starts properly. Note that it is also possible to remove this option and use the default stack size per thread, but be aware of the impact this will have on memory consumption.
| Cassandra | 11,901,421 | 23 |
Is Cassandra's data stored only in the /var/lib/cassandra folder as mentioned in the cassandra.yaml file?
Or is there any other location where Cassandra data is stored?
| You can change the data storage location in the cassandra.yaml file, if you don't want data stored in /var/lib. See DataStax's Guide for Configuring Cassandra for a full explanation of the config file. In particular,
> commitlog_directory
The directory where the commit log will be
stored. For optimal write performance, DataStax recommends the commit
log be on a separate disk partition (ideally a separate physical
device) from the data file directories.
> data_file_directories
The directory location where column family data
(SSTables) will be stored.
They do recommend you put the commit log on one disk and the actual data on a second disk to avoid running out of space.
| Cassandra | 9,631,591 | 23 |
In Cassandra terminology, what is TimeUUID and when is it used?
| TimeUUID is a version 1 UUID: a 16-byte globally unique identifier generated from a timestamp (plus a clock sequence and node identifier), so values sort by the time they were created.
Sample hex presentation: a4a70900-24e1-11df-8924-001ff3591711
See http://en.wikipedia.org/wiki/Universally_Unique_Identifier
It may serve as a primary key in terms of relational database or when you need to store a list of values under some key.
For example check this open source twitter example based on cassandra:
http://twissandra.com/
http://github.com/ericflo/twissandra
User = {
'a4a70900-24e1-11df-8924-001ff3591711': {
'id': 'a4a70900-24e1-11df-8924-001ff3591711',
'username': 'ericflo',
'password': '****',
},
}
Username = {
'ericflo': {
'id': 'a4a70900-24e1-11df-8924-001ff3591711',
},
}
Friends = {
'a4a70900-24e1-11df-8924-001ff3591711': {
# friend id: timestamp of when the friendship was added
'10cf667c-24e2-11df-8924-001ff3591711': '1267413962580791',
'343d5db2-24e2-11df-8924-001ff3591711': '1267413990076949',
'3f22b5f6-24e2-11df-8924-001ff3591711': '1267414008133277',
},
}
Here user is assigned a unique key a4a70900-24e1-11df-8924-001ff3591711 which is used to refer to the user from other places.
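In modern CQL (which postdates this answer) the type is spelled timeuuid and the now() function generates one; a minimal sketch:
CREATE TABLE events (id timeuuid PRIMARY KEY, body text);
INSERT INTO events (id, body) VALUES (now(), 'hello');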
| Cassandra | 2,614,195 | 23 |
In several places it's advised to design our Cassandra tables according to the queries we are going to perform on them. In this article by DataScale they state this:
The truth is that having many similar tables with similar data is a good thing in Cassandra. Limit the primary key to exactly what you’ll be searching with. If you plan on searching the data with a similar, but different criteria, then make it a separate table. There is no drawback for having the same data stored differently. Duplication of data is your friend in Cassandra.
[...]
If you need to store the same piece of data in 14 different tables, then write it out 14 times. There isn’t a handicap against multiple writes.
I have understood this, and now my question is: provided that I have an existing table, say
CREATE TABLE invoices (
id_invoice int PRIMARY KEY,
year int,
id_client int,
type_invoice text
)
But I want to query by year and type instead, so I'd like to have something like
CREATE TABLE invoices_yr (
id_invoice int,
year int,
id_client int,
type_invoice text,
PRIMARY KEY (type_invoice, year)
)
With type_invoice as the partition key and year as the clustering key, what's the preferred way to copy the data from one table to another to perform optimized queries later on?
My Cassandra version:
user@cqlsh> show version;
[cqlsh 5.0.1 | Cassandra 3.5.0 | CQL spec 3.4.0 | Native protocol v4]
| You can use the cqlsh COPY command:
To copy your invoices data into a csv file, use:
COPY invoices(id_invoice, year, id_client, type_invoice) TO 'invoices.csv';
And to copy back from the csv file to the table, in your case invoices_yr, use:
COPY invoices_yr(id_invoice, year, id_client, type_invoice) FROM 'invoices.csv';
If you have huge data you can use sstable writer to write and sstableloader to load data faster.
http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated
| Cassandra | 41,448,374 | 22 |
Maybe it is a stupid question, but I'm not able to determine the size of a table in Cassandra.
This is what I tried:
select count(*) from articles;
It works fine if the table is small but once it fills up, I always run into timeout issues:
cqlsh:
OperationTimedOut: errors={}, last_host=127.0.0.1
DBeaver:
Run 1: 225,000 (7477 ms)
Run 2: 233,637 (8265 ms)
Run 3: 216,595 (7269 ms)
I assume that it hits some timeout and just aborts. The actual number of entries in the table is probably much higher.
I'm testing against a local Cassandra instance which is completely idle. I would not mind if it has to do a full table scan and is unresponsive during that time.
Is there a way to reliably count the number of entries in a Cassandra table?
I'm using Cassandra 2.1.13.
| Here is my current workaround:
COPY articles TO '/dev/null';
...
3568068 rows exported to 1 files in 2 minutes and 16.606 seconds.
Background: Cassandra supports exporting a table to a text file, for instance:
COPY articles TO '/tmp/data.csv';
Output: 3568068 rows exported to 1 files in 2 minutes and 25.559 seconds
That also matches the number of lines in the generated file:
$ wc -l /tmp/data.csv
3568068
| Cassandra | 36,744,210 | 22 |
I'm inserting into a Cassandra table with timestamp columns. The data I have comes with microsecond precision, so the time data string looks like this:
2015-02-16T18:00:03.234+00:00
However, in cqlsh when I run a select query the microsecond data is not shown; I can only see time down to second precision. The 234 microsecond part is not shown.
I guess I have two questions:
1) Does Cassandra capture microseconds with timestamp data type? My guess is yes?
2) How can I see that with cqlsh to verify?
Table definition:
create table data (
datetime timestamp,
id text,
type text,
data text,
primary key (id, type, datetime)
)
with compaction = {'class' : 'DateTieredCompactionStrategy'};
Insert query ran with Java PreparedStatment:
insert into data (datetime, id, type, data) values(?, ?, ?, ?);
Select query was simply:
select * from data;
| In an effort to answer your questions, I did a little digging on this one.
Does Cassandra capture microseconds with timestamp data type?
Microseconds no, milliseconds yes. If I create your table, insert a row, and try to query it by the truncated time, it doesn't work:
aploetz@cqlsh:stackoverflow> INSERT INTO data (datetime, id, type, data)
VALUES ('2015-02-16T18:00:03.234+00:00','B26354','Blade Runner','Deckard- Filed and monitored.');
aploetz@cqlsh:stackoverflow> SELECT * FROM data
WHERE id='B26354' AND type='Blade Runner' AND datetime='2015-02-16 12:00:03-0600';
id | type | datetime | data
----+------+----------+------
(0 rows)
But when I query for the same id and type values while specifying milliseconds:
aploetz@cqlsh:stackoverflow> SELECT * FROM data
WHERE id='B26354' AND type='Blade Runner' AND datetime='2015-02-16 12:00:03.234-0600';
id | type | datetime | data
--------+--------------+--------------------------+-------------------------------
B26354 | Blade Runner | 2015-02-16 12:00:03-0600 | Deckard- Filed and monitored.
(1 rows)
So the milliseconds are definitely there. There was a JIRA ticket created for this issue (CASSANDRA-5870), but it was resolved as "Won't Fix."
How can I see that with cqlsh to verify?
One possible way to actually verify that the milliseconds are indeed there, is to nest the timestampAsBlob() function inside of blobAsBigint(), like this:
aploetz@cqlsh:stackoverflow> SELECT id, type, blobAsBigint(timestampAsBlob(datetime)),
data FROM data;
id | type | blobAsBigint(timestampAsBlob(datetime)) | data
--------+--------------+-----------------------------------------+-------------------------------
B26354 | Blade Runner | 1424109603234 | Deckard- Filed and monitored.
(1 rows)
While not optimal, here you can clearly see the millisecond value of "234" on the very end. This becomes even more apparent if I add a row for the same timestamp, but without milliseconds:
aploetz@cqlsh:stackoverflow> INSERT INTO data (id, type, datetime, data)
VALUES ('B25881','Blade Runner','2015-02-16T18:00:03+00:00','Holden- Fine as long as nobody unplugs him.');
aploetz@cqlsh:stackoverflow> SELECT id, type, blobAsBigint(timestampAsBlob(datetime)),
... data FROM data;
id | type | blobAsBigint(timestampAsBlob(datetime)) | data
--------+--------------+-----------------------------------------+---------------------------------------------
B25881 | Blade Runner | 1424109603000 | Holden- Fine as long as nobody unplugs him.
B26354 | Blade Runner | 1424109603234 | Deckard- Filed and monitored.
(2 rows)
| Cassandra | 28,547,616 | 22 |
When experimenting with Cassandra I've observed that Cassandra writes to the following files:
/.../cassandra/commitlog/CommitLog-<id>.log
/.../cassandra/data/Keyspace1/Standard1-1-Data.db
/.../cassandra/data/Keyspace1/Standard1-1-Filter.db
/.../cassandra/data/Keyspace1/Standard1-1-Index.db
/.../cassandra/data/system/LocationInfo-1-Data.db
/.../cassandra/data/system/LocationInfo-1-Filter.db
/.../cassandra/data/system/LocationInfo-1-Index.db
/.../cassandra/data/system/LocationInfo-2-Data.db
/.../cassandra/data/system/LocationInfo-2-Filter.db
/.../cassandra/data/system/LocationInfo-2-Index.db
/.../cassandra/data/system/LocationInfo-3-Data.db
/.../cassandra/data/system/LocationInfo-3-Filter.db
/.../cassandra/data/system/LocationInfo-3-Index.db
/.../cassandra/system.log
The general structure seems to be:
/.../cassandra/commitlog/CommitLog-ID.log
/.../cassandra/data/KEYSPACE/COLUMN_FAMILY-N-Data.db
/.../cassandra/data/KEYSPACE/COLUMN_FAMILY-N-Filter.db
/.../cassandra/data/KEYSPACE/COLUMN_FAMILY-N-Index.db
/.../cassandra/system.log
What is the Cassandra file structure? More specifically, how are the data, commitlog directories used, and what is the structure of the files in the data directory (Data/Filter/Index)?
| A write to a Cassandra node first hits the CommitLog (sequential). Cassandra then stores the values in column-family-specific, in-memory data structures called Memtables. The Memtables are flushed to disk whenever one of the configurable thresholds is exceeded: (1) the data size in the memtable, (2) the number of objects reaching a certain limit, or (3) the lifetime of the memtable expiring.
The data folder contains a subfolder for each keyspace. Each subfolder contains three kind of files:
Data files: An SSTable (nomenclature
borrowed from Google) stands for
Sorted Strings Table and is a file of
key-value string pairs (sorted by
keys).
Index file: (Key, offset) pairs (points into data file)
Bloom filter: all keys in data file
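You can watch these files get created by forcing a flush yourself, e.g. nodetool flush <keyspace> <columnfamily>.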
| Cassandra | 2,359,175 | 22 |
Very new to Cassandra so apologies if the question is simple.
I created a table:
create table ApiLog (
LogId uuid,
DateCreated timestamp,
ClientIpAddress varchar,
primary key (LogId, DateCreated));
This work fine:
select * from apilog
If I try to add a where clause with the DateCreated like this:
select * from apilog where datecreated <= '2016-07-14'
I get this:
Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING
From other questions here on SO and from the tutorials on datastax it is my understanding that since the datecreated column is a clustering key it can be used to filter data.
I also tried to create an index but I get the same message back. And I tried to remove DateCreated from the primary key and have it only as an index, and I still get the same message back:
create index ApiLog_DateCreated on dotnetdemo.apilog (datecreated);
| The partition key LogId determines on which node each partition will be stored. So if you don't specify the partition key, then Cassandra has to filter all the partitions of this table on all the nodes to find matching data. That's why you have to say ALLOW FILTERING, since that operation is very inefficient and is discouraged.
If you specify a specific LogId, then Cassandra can find the partition on a single node and efficiently do a range query by the clustering key.
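For example, once the partition key is supplied, the clustering column can be range-filtered directly (the uuid is a placeholder value):
SELECT * FROM apilog WHERE logid = 62c36092-82a1-3a00-93d1-46196ee77204 AND datecreated <= '2016-07-14';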
So you need to plan your schema such that you can do your range queries within a single partition and not have to do a full table scan like you're trying to do.
| Cassandra | 38,350,656 | 21 |
While using the C/C++ driver of Cassandra, I at times see these kind of messages popping up in my console:
1460937092.140 [WARN] (src/response.cpp:51:char*
cass::Response::decode_warnings(char*, size_t)):
Server-side warning: Aggregation query used without partition key
Wondering whether someone knows what that means. What should I be looking for in my code that could generate this error, or is it just something on the server side that I have no control over?
| That warning is telling you that you are doing a select using a user defined aggregate without a partition key. That may be one that is built in like avg, count, min, max or could've one of your own.
An example:
select avg(temperature) from weather_data;
Vs
select avg(temperature) from weather_data where id = 1;
The first example would scan all rows of data in the cluster and could be a serious performance hit. If there are enough rows, the query could time out.
The second will only scan a single partition of data which keeps the query to one server and is the recommended usage.
| Cassandra | 36,683,533 | 21 |
As stated in this doc, to select a range of rows I have to write this:
select first 100 col1..colN from table;
but when I launch this on cql shell I get this error:
<ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:13 no viable alternative at input '100' (select [first] 100...)">
What's wrong?
| According to the docs, the keyword first is for limiting the number of columns, not rows.
To limit the number of rows, just use the limit keyword:
select col1..colN from table limit 100;
the default limit is 10000
| Cassandra | 30,861,162 | 21 |
In SQL, I am able to do:
select getdate(), getdate() - 7
Which returns the current date as well as current date - 7 days. I want to achieve the same in Cassandra CQL. I tried:
select dateof(now())
But that does not work. It works only on insert and not in select. How can I get the same? Any help would be appreciated.
| select dateof(now())
On its own, you are correct, that does not work. But if you have a table that you know only has one row (like system.local):
aploetz@cqlsh:stackoverflow> SELECT dateof(now()) FROM system.local ;
dateof(now())
--------------------------
2015-03-26 03:18:39-0500
(1 rows)
Unfortunately, Cassandra CQL does not (yet? CASSANDRA-5505) include support for arithmetic operations, let alone date arithmetic. So subtracting 7 days from that value is something that you would have to do in your application level.
Edit 20200422
The newer syntax uses the toTimestamp() function instead:
aploetz@cqlsh> SELECT toTimestamp(now()) FROM system.local;
system.totimestamp(system.now())
----------------------------------
2020-04-22 13:22:04.752000+0000
(1 rows)
Both syntaxes work as of 20200422.
| Cassandra | 29,272,691 | 21 |
I added a bottle server that uses python's cassandra library, but it exits with this error:
Bottle FATAL Exited too quickly (process log may have details)
The log shows this:
File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1765, in _reconnect_internal
    raise NoHostAvailable("Unable to connect to any servers", errors)
So I tried to run it manually using supervisorctl start Bottle, and then it started with no issue. The conclusion: the Bottle service starts too fast, before the Cassandra service it needs is up, so a delay is needed!
| This is what I use:
[program:uwsgi]
command=bash -c 'sleep 5 && uwsgi /etc/uwsgi.ini'
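The sleep gives the Cassandra service a few seconds to finish starting before uwsgi brings up the app and the driver tries to connect.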
| Cassandra | 26,342,693 | 21 |
I'm building a backup and restore process for a Cassandra database so that it's ready when I need it, and so that I understand the details in order to build something that will work for production. I'm following Datastax's instructions here:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_restore_c.html.
As a start, I'm seeding the database on a dev box then attempting to make the backup/restore work. Here's the backup script:
#!/bin/bash
cd /opt/apache-cassandra-2.0.9
./bin/nodetool clearsnapshot -t after_seeding makeyourcase
./bin/nodetool snapshot -t after_seeding makeyourcase
cd /var/lib/
tar czf after_seeding.tgz cassandra/data/makeyourcase/*/snapshots/after_seeding
Yes, tar is not the most efficient way, perhaps, but I'm just trying to get something working right now. I've checked the tar, and all the files are there.
Once the database is backed up, I shut down Cassandra and my app, then rm -rf /var/lib/cassandra/ to simulate a complete loss.
Now to restore the database. Restoration "Method 2" from http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html is more compatible with my schema-creation component than Method 1.
So, Method 2/Step 1, "Recreate the schema": Restart Cassandra, then my app. The app is built to re-recreate the schema on startup when necessary. Once it's up, there's a working Cassandra node with a schema for the app, but no data.
Method 2/Step 2 "Restore the snapshot": They give three alternatives, the first of which is to use sstableloader, documented at http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html. The folder structure that the loader requires is nothing like the folder structure created by the snapshot tool, so everything has to be moved into place. Before going to all that trouble, I'll just try it out on one table:
>./bin/sstableloader makeyourcase/users
Error: Could not find or load main class org.apache.cassandra.tools.BulkLoader
Hmmm, well, that's not going to work. BulkLoader is in ./lib/apache-cassandra-2.0.9.jar, but the loader doesn't seem to be set up to work out of the box. Rather than debug the tool, let's move on to the second alternative, copying the snapshot directory into the makeyourcase/users/snapshots/ directory. This should be easy, since we're throwing the snapshot directory right back where it came from, so tar xzf after_seeding.tgz should do the trick:
cd /var/lib/
tar xzf after_seeding.tgz
chmod -R u+rwx cassandra/data/makeyourcase
and that puts the snapshot directories back under their respective 'snapshots' directories, and a refresh should restore the data:
cd /opt/apache-cassandra-2.0.9
./bin/nodetool refresh -- makeyourcase users
This runs without complaint. Note that you have to run this for each and every table, so you have to generate the list of tables first. But, before we do that, note that there's something interesting in the Cassandra logs:
INFO 14:32:26,319 Loading new SSTables for makeyourcase/users...
INFO 14:32:26,326 No new SSTables were found for makeyourcase/users
So, we put the snapshot back, but Cassandra didn't find it. I also tried moving the snapshot directory under the existing SSTables directory, and copying the old SSTable files into the existing directory, with the same error in the log. Cassandra doesn't log where it expects to find them, just that it can't find them. The docs say to put them into a directory named data/keyspace/table_name-UUID, but there is no such directory. There is one named data/makeyourcase/users/snapshots/1408820504987-users/, but putting the snapshot dir there, or the individual files, didn't work.
The third alternative, the "Node restart method" doesn't look suitable for a multi-node production environment, so I didn't try that.
Edit:
Just to make this perfectly explicit for the next person, here are the preliminary, working backup and restore scripts that apply the accepted answer.
myc_backup.sh:
#!/bin/bash
cd ~/bootstrap/apache-cassandra-2.0.9
./bin/nodetool clearsnapshot -t after_seeding makeyourcase
./bin/nodetool snapshot -t after_seeding makeyourcase
cd /var/lib/
tar czf after_seeding.tgz cassandra/data/makeyourcase/*/snapshots/after_seeding
myc_restore.sh:
#!/bin/bash
cd /var/lib/
tar xzf after_seeding.tgz
chmod -R u+rwx cassandra/data/makeyourcase
cd ~/bootstrap/apache-cassandra-2.0.9
TABLE_LIST=`./bin/nodetool cfstats makeyourcase | grep "Table: " | sed -e 's+^.*: ++'`
for TABLE in $TABLE_LIST; do
echo "Restore table ${TABLE}"
cd /var/lib/cassandra/data/makeyourcase/${TABLE}
if [ -d "snapshots/after_seeding" ]; then
cp snapshots/after_seeding/* .
cd ~/bootstrap/apache-cassandra-2.0.9
./bin/nodetool refresh -- makeyourcase ${TABLE}
cd /var/lib/cassandra/data/makeyourcase/${TABLE}
rm -rf snapshots/after_seeding
echo " Table ${TABLE} restored."
else
echo " >>> Nothing to restore."
fi
done
| Added more details:
You can run the snapshot for your particular keyspace using:
$ nodetool snapshot <mykeyspace> -t <SnapshotDirectoryName>
This will create the snapshot files inside the snapshots directory in data.
When you delete your data, make sure you don't delete the snapshots folder or you will not be able to restore it (unless you are moving it to another location / machine.)
$ pwd
/var/lib/cassandra/data/mykeyspace/mytable
$ ls
mykeyspace-mytable-jb-2-CompressionInfo.db mykeyspace-mytable-jb-2-Statistics.db
mykeyspace-mytable-jb-2-Data.db mykeyspace-mytable-jb-2-Filter.db mykeyspace-mytable-jb-2-Index.db
mykeyspace-mytable-jb-2-Summary.db mykeyspace-mytable-jb-2-TOC.txt snapshots
$ rm *
rm: cannot remove `snapshots': Is a directory
Once you are ready to restore, copy back the snapshot data into the keyspace/table directory (one for each table):
$ pwd
/var/lib/cassandra/data/mykeyspace/mytable
$ sudo cp snapshots/<SnapshotDirectoryName>/* .
You mentioned:
and that puts the snapshot directories back under their respective 'snapshots' directories, and a refresh >should restore the data:
I think the issue is that you are restoring the Snapshot data into the snapshot directory. It should go right in the table directory. Everything else seems right, let me know.
| Cassandra | 25,465,904 | 21 |
There is a great talk here about simulating partition issues in Cassandra with Kingsbury's Jepsen library.
My question is - with Cassandra are you mainly concerned with the Partitioning part of the CAP theorem, or is Consistency a factor you need to manage as well?
| Cassandra is typically classified as an AP system, meaning that availability and partition tolerance are generally considered to be more important than consistency. However, real world systems rarely fall neatly into these categories, so it's more helpful to view CAP as a continuum. Most systems will make some effort to be consistent, available, and partition tolerant, and many (including Cassandra) can be tuned depending on what's most important. Turning knobs like replication factor and consistency level can have a dramatic impact on C, A, and P.
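For example, in cqlsh you can pick the consistency level per session (a sketch; the keyspace and table names are placeholders, and what each level buys you depends on your replication factor):
CONSISTENCY QUORUM;
SELECT * FROM mykeyspace.mytable WHERE id = 1;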
Even defining what the terms mean can be challenging, as various use cases have different requirements for each. So rather than classify a system as CP, AP, or whatever, it's more helpful to think in terms of the options it provides for tuning these properties as appropriate for the use case.
Here's an interesting discussion on how things have changed in the years since the CAP theorem was first introduced.
| Cassandra | 20,205,797 | 21 |
How do I write subqueries/nested queries in Cassandra? Is this facility provided in CQL?
Example I tried:
cqlsh:testdb> select itemname from item where itemid = (select itemid from orders where customerid=1);
It just throws the following error -
Bad Request: line 1:87 no viable alternative at input ';'
| Because of its distributed nature, Cassandra has no support for RDBMS style joins. You have a few options for when you want something like a join.
One option is to perform separate queries and then have your application join the data itself. This makes sense if the data is relatively small and you only have to perform a small number of queries. Based on the example you gave above, this would probably be a good solution for you.
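Applied to your example, that means two round trips; something like the following, where the itemid literal in the second query comes from the first query's result:
SELECT itemid FROM orders WHERE customerid = 1;
SELECT itemname FROM item WHERE itemid = 123;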
For more complicated joins, the usual strategy is to denormalize the data and store a materialized view of the join. The advantage to this is that fetching this data will be much faster than having to build the join in your application every time you need it. The cost is that you now have multiple places where you are storing the same data and you will need to keep it all in sync. You can either update all your views when new data comes into the system or you can have a periodic batch job that rebuilds them.
You might find this article useful: Do You Really Need SQL to Do It All in Cassandra? It's a bit old but its principles still apply.
| Cassandra | 16,790,297 | 21 |
I'm developing a PHP platform that will make heavy use of images, documents and any file format that comes to mind, so I was wondering if Cassandra is a good choice for my needs.
If not, can you tell me how I should store files? I'd like to keep using Cassandra because it's fault-tolerant and uses auto-replication among nodes.
Thanks for help.
| From the cassandra wiki,
Cassandra's public API is based on Thrift, which offers no streaming abilities:
any value written or fetched has to fit in memory. This is inherent to Thrift's
design and is therefore unlikely to change. So adding large object support to
Cassandra would need a special API that manually split the large objects up
into pieces. A potential approach is described in http://issues.apache.org/jira/browse/CASSANDRA-265.
As a workaround in the meantime, you can manually split files into chunks of whatever
size you are comfortable with -- at least one person is using 64MB -- and making a file correspond
to a row, with the chunks as column values.
So if your files are < 10MB you should be fine, just make sure to limit the file size, or break large files up into chunks.
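In today's CQL, that chunking scheme might look like this (a sketch; the table and column names are invented):
CREATE TABLE file_chunks (
    file_id text,
    chunk_index int,
    data blob,
    PRIMARY KEY (file_id, chunk_index)
);
The application splits each file into fixed-size pieces on write and reassembles them in chunk_index order on read.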
| Cassandra | 8,842,837 | 21 |
I just spun up a machine on EC2 running Cassandra following the instructions in the link below, but I have no idea what version it is. How do I figure this out? I know I'm missing something incredibly simple, just don't know where to look.
http://wiki.apache.org/cassandra/CloudConfig
| It might be easier to use nodetools
./nodetool -h localhost version
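On newer versions you can also ask the node itself over CQL:
SELECT release_version FROM system.local;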
| Cassandra | 3,604,857 | 21 |
When I try to Insert data in Cassandra using the below query I am getting the below mentioned error
cqlsh:assign> insert into tblFiles1(rec_no,clientid,contenttype,datafiles,filename) values(1,2,'gd','dgfsdg','aww');
WriteTimeout: code=1100 [Coordinator node timed out waiting for
replica nodes' responses] message="Operation timed out - received only
0 responses." info={'received_responses': 0, 'required_responses': 1,
'consistency': 'ONE'}
My Version of Cassandra and DSE:
[cqlsh 5.0.1 | Cassandra 2.1.5.469 | DSE 4.7.0 | CQL spec 3.2.0 | Native protoco l v3]
| Increase the write_request_timeout_in_ms timeout in the Cassandra config file (cassandra.yaml):
write_request_timeout_in_ms: 20000
and restart your server.
| Cassandra | 30,575,125 | 20 |
I am just getting started on Cassandra and I was trying to create tables with different partition and clustering keys to see how they can be queried differently.
I created a table with primary key of the form - (a),b,c where a is the partition key and b,c are clustering key.
When querying I noticed that the following query:
select * from tablename where b=val;
results in:
Cannot execute this query as it might involve data filtering and thus
may have unpredictable performance. If you want to execute this query
despite the performance unpredictability, use ALLOW FILTERING
And using "ALLOW FILTERING" gets me what I want (even though I've heard its bad for performance).
But when I run the following query:
select * from tablename where c=val;
It says:
PRIMARY KEY column "c" cannot be restricted (preceding column "b" is either not restricted or by a non-EQ relation)
And there is no "ALLOW FILTERING" option at all.
MY QUESTION IS - Why are all clustering keys not treated the same? column b which is adjacent to the partition key 'a' is given an option of 'allow filtering' which allows querying on it while querying on column 'c' does not seem possible at all (given the way this table is laid out).
ALLOW FILTERING gets Cassandra to scan through all SSTables and pull the data out when the partition key is missing, so why can't we do the same for column c?
| It's not that clustering keys are not treated the same, it's that you can't skip them. This is because Cassandra uses the clustering keys to determine on-disk sort order within a partition.
To add to your example, assume PRIMARY KEY ((a),b,c,d). You could run your query (with ALLOW FILTERING) by specifying just b, or b and c. But it wouldn't allow you to specify c and d (skipping b) or b and d (skipping c).
And as a side note, if you really want to be able to query by only b or only c, then you should support those queries with additional tables designed as such. ALLOW FILTERING is a band-aid, and is not something you should ever do in a production Cassandra deployment.
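A sketch of such a query table for the example above (column types assumed):
CREATE TABLE tablename_by_c (
    c text,
    a text,
    b text,
    d text,
    PRIMARY KEY ((c), a, b, d)
);
You write each row to both tables and query whichever matches the access pattern.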
| Cassandra | 30,486,916 | 20 |
I need to insert a new column into an existing column family via a CQL script.
I want to do something like:
alter COLUMNFAMILY rules ADD rule_template text IF NOT EXISTS;
How can I achieve this purely in CQL script?
| There is no optional "if not exists" for altering column families (tables). As a workaround you could just execute the alter command and ignore the error if the column already exists. There shouldn't be any harm in it, other than the error message.
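That is, just run the plain statement and treat a "column already exists" error as a no-op:
ALTER TABLE rules ADD rule_template text;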
| Cassandra | 25,728,944 | 20 |
I am trying to run the following query
SELECT edge_id, b_id FROM booking_by_edge WHERE edge_id IN ?
I bind a Java list of Longs as a parameter and I get an exception
SyntaxError: line 0:-1 mismatched input '<EOF>' expecting ')' (ResultSetFuture.java:242)
If I try to use (?) it expects a single Long item to be bound, but I need a collection.
Is there an error in my syntax?
| Tested in Cassandra 2.1.3, the following code snippet works:
PreparedStatement prepared = session.prepare("SELECT edge_id, b_id FROM booking_by_edge WHERE edge_id IN ?;");
List<Long> edgeIds = Arrays.asList(1L, 2L, 3L);
session.execute(prepared.bind(edgeIds));
| Cassandra | 16,918,853 | 20 |
How do I stop cassandra server running on a single node in my mac os x?
The Cassandra script doesn't have a -stop option. The only way, other than restarting Mac OS X, was to do a "ps", find the Java process with Cassandra's arguments, and use kill -9 to kill the process.
But trying to restart cassandra after that still throws
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7199; nested exception is:
java.net.BindException: Address already in use.
Anybody seen it? Any quick solutions?
| If you've installed cassandra via homebrew, use brew info cassandra and it will tell you how to load/unload cassandra using launchctl. This worked better for me than the other answers here.
Commands
brew info cassandra To see status of cassandra
brew services start cassandra To start cassandra
brew services stop cassandra To stop cassandra
| Cassandra | 10,877,072 | 20 |
Disclaimer: by referential data, I do not mean referential integrity
I am learning nosql and would like to understand how data should be modeled. In a typical relational database for a CMS application, for example, you may have two tables: article and author, where article has a reference to the author.
In a nosql system, you may create an article document this way, since documents are just disguised object graphs:
{
title: "Learn nosql in 5 minutes",
slug: "nosql_is_easy",
author: {firstName: "Smarty", lastName: "Pants"}
}
{
title: "Death to RDBMS",
slug: "rdbms_sucks",
author: {firstName: "Smarty", lastName: "Pants"}
}
and so on...
Say one day Mr. Smarty Pants decided to change his name to Regular Joe because nosql has become ubiquitous. In such uses case, every article would need to be scanned and the author's name updated.
So my question is: how should the data be modeled in nosql to fit the basic use cases for a CMS so that performance is on par with or faster than an RDBMS? mongodb, for example, claims CMS as a use-case ...
Edit:
Few people have already suggested normalizing the data like:
article
{
title: "Death to RDBMS",
slug: "rdbms_sucks",
author: {id: "10000001"}
}
author
{
name: "Big Brother",
id: "10000001"
}
However, since nosql by design lacks joins, you would have to use mapreduce-like functions to bring the data together. If this is your suggestion, please comment on the performance of such an operation.
Edit 2:
If you think nosql is not a suitable solution for any kind of data that requires referential data, please also explain why. This would seem to make the use case for nosql rather limited, since any reasonable application would contain relational data.
Edit 3:
Nosql doesn't mean non-relational
| Your data is clearly relational: an article has an author. You can model your data in a NOSQL store like MongoDB in just the same way as you would in a relational store BUT because there are no joins in the database you have to make two calls to the database so you haven't gained anything.
BUT ... what you CAN do with a NOSQL store is to denormalize the data somewhat to get improved performance (a single round trip to get everything you need to display the article) BUT at the expense of immediate consistency: trading off always accurate author names for eventually accurate author names.
You might for example, use this in your article:
author: {firstName: "Smarty", lastName: "Pants", _id:DE342624EF }
Now you can display the article really fast and when someone does change their name you can either kick off a background task to update all the existing articles or you can wait for a periodic consistency sweep to fix it.
Many major web sites no longer give you immediate consistency. There are changes that you make that are only eventually seen by the other users on the site.
| Cassandra | 7,591,943 | 20 |
Can you share your thoughts on how you would implement data versioning in Cassandra?
Suppose that I need to version records in a simple address book. (Address book records are stored as Rows in a ColumnFamily).
I expect that the history:
will be used infrequently
will be used all at once to present it in a "time machine" fashion
there won't be more versions than few hundred to a single record.
history won't expire.
I'm considering the following approach:
Convert the address book to a Super Column Family and store multiple versions of address book records in one Row, keyed by time stamp, as super columns.
Create new Super Column Family to store old records or changes to the records.
Such structure would look as follows:
{
'address book row key': {
'time stamp1': {
'first name': 'new name',
'modified by': 'user id',
},
'time stamp2': {
'first name': 'new name',
'modified by': 'user id',
},
},
'another address book row key': {
'time stamp': {
....
Store versions as serialized (JSON) object attached in new ColumnFamilly. Representing sets of version as rows and versions as columns. (modelled after Simple Document Versioning with CouchDB)
| If you can add the assumption that address books typically have fewer than 10,000 entries in them, then using one row per address book time line in a super column family would be a decent approach.
A row would look like:
{'address_book_18f3a8':
{1290635938721704: {'entry1': 'entry1_stuff', 'entry2': 'entry2_stuff'}},
{1290636018401680: {'entry1': 'entry1_stuff_v2', ...},
...
}
where the row key identifies the address book, each super column name is a time stamp, and the subcolumns represent the address book's contents for that version.
This would allow you to read the latest version of an address book with only one query and also write a new version with a single insert.
The reason I suggest using this if address books are less than 10,000 elements is that super columns must be completely deserialized when you read even a single subcolumn. Overall, not that bad in this case, but it's something to keep in mind.
An alternative approach would be to use a single row per version of the address book, and use a separate CF with a time line row per address book like:
{'address_book_18f3a8': {1290635938721704: some_uuid1, 1290636018401680: some_uuid2...}}
Here, some_uuid1 and some_uuid2 correspond to the row key for those versions of the address book. The downside to this approach is that it requires two queries every time the address book is read. The upside is that it lets you efficiently read only select parts of an address book.
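For readers on CQL-era Cassandra (super column families are long gone), the first layout translates roughly to the following sketch, with invented names:
CREATE TABLE address_book_versions (
    book_id text,
    version_ts timeuuid,
    entry_name text,
    entry_value text,
    PRIMARY KEY ((book_id), version_ts, entry_name)
);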
| Cassandra | 4,183,945 | 20 |
Right now I'm developing the prototype of a web application that aggregates large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use NHibernate ORM layer to interact with the DB. I've got a table defined for users, roles, submissions, tags, notifications and etc. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant number. I feel that it may struggle performing join operations fast enough.
This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews on MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.
| The other answers here have focused mainly on the technical aspects, but I think there are important points to be made that focus on the startup company aspect of things:
Availabililty of talent. MySQL is very common and you will probably find it easier (and more importantly, cheaper) to find developers for it, compared to the more rarified database systems. This larger developer base will also mean more tutorials, a more active support community, etc.
Ease of development. Again, because MySQL is so common, you will find it is the db of choice for a great many systems / services. This common ground may make any external integration a little easier.
You are preparing for a situation that may never exist, and is manageable if it does. Very few businesses (never mind startups) come close to MySQL's limits, and with all due respect (and I am just guessing here), the likelihood that your startup will ever hit the sort of data throughput to cripple a properly structured, well-resourced MySQL db is almost zero.
Basically, don't spend your time ( == money) worrying about which db to use, as MySQL can handle a lot of data, is well proven and well supported.
Going back to the technical side of things... Something that will have a far greater impact on the speed of your app than the choice of db is how efficiently data can be cached. An effective cache can have dramatic effects on reducing db load and speeding up the general responsiveness of an app. I would spend your time investigating caching solutions and making sure you are developing your app in such a way that it can make the best use of those solutions.
FYI, my caching solution of choice is memcached.
| Cassandra | 2,839,505 | 20 |
I am not using Elasticsearch. I am trying to perform some database operations in Cassandra using CQL. I am using threads. While running the code I always get the following exception in a thread after a while: com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query.
I have tested with even one thread; the error is still there. Here is my code:
InetAddress addrOne = InetAddress.getByName("52.15.195.41");
InetSocketAddress addrSocOne = new InetSocketAddress(addrOne,9042);
CqlSession session = CqlSession.builder().addContactPoint(addrSocOne).withLocalDatacenter("us-east-2").withKeyspace("test").build();
while(counter <= 100)
{
String query = "select max(id) FROM samplequeue";
ResultSet rs = session.execute(query);
for (Row row : rs)
{
int exS = row.getInt("system.max(id)");
}
counter++;
Thread.sleep(50);
}
This is a very simple, modified example just to demonstrate the problem. I am unable to resolve it. All the threads are exiting with the same exception. I am running Cassandra 3.11.4 on AWS. All my nodes are up and running and I can perform operations fine in the backend.
| Change .withLocalDatacenter("us-east-2") to .withLocalDatacenter("datacenter1") and retry. The value passed to withLocalDatacenter must match the datacenter name Cassandra itself reports (check the Datacenter line in nodetool status output); with the default SimpleSnitch that name is datacenter1, not the AWS region.
| Cassandra | 58,279,546 | 19 |
Could you please explain why Cassandra is not linearizable even when quorum-based reads and writes are used?
Linearizability defined as
If operation B started after operation A successfully completed, then operation B must see the system in the same state as it was on completion of operation A, or a newer state.
| Edit considering Cassandra foreground Read Repair:
Writes that fail because only a partial set of replicas are updated could lead to two different readers seeing two different values of data. This is because of the lack of rollbacks in simple quorum-based consistency approaches. This behavior breaks the linearizability guarantees for single-key reads. As described in this discussion, a distributed consensus protocol such as Raft or Paxos is a must-have for such a guarantee.
Also, other phenomena such as clock drift and leap second can break the Cassandra session consistency.
Earlier Answer (without considering Cassandra foreground read repair):
Summary:
In Cassandra write may not feel atomic. Some nodes get writes faster than others thus even if we rely on quorum the result depends on the set of nodes that return values and what values they hold at that point.
Also, to explain definition of linearizability adding to definition in bold
If operation B started after operation A successfully completed, then
operation B must see the system in the same state as it was on
completion of operation A, or a newer state (but never old state again) .
Copying from Martin Klepmann's Data Intensive Applications book
Linearizability and quorums
Intuitively, it seems as though strict quorum reads and writes should be linearizable in a Dynamo-style model. However, when we have variable network delays, it is possible to have race conditions, as demonstrated in Figure 9-6.
In Figure 9-6, the initial value of x is 0, and a writer client is updating x to 1 by sending the write to all three replicas (n = 3, w = 3). Concurrently, client A reads from a quorum of two nodes (r = 2) and sees the new value 1 on one of the nodes. Also concurrently with the write, client B reads from a different quorum of two nodes, and gets back the old value 0 from both.
The quorum condition is met (w + r > n), but this execution is nevertheless not linearizable: B’s request begins after A’s request completes, but B returns the old value while A returns the new value. (It’s once again the Alice and Bob situation from Figure 9-1.)
Interestingly, it is possible to make Dynamo-style quorums linearizable at the cost of reduced performance: a reader must perform read repair (see “Read repair and antientropy” on page 178) synchronously, before returning results to the application [23], and a writer must read the latest state of a quorum of nodes before sending its writes [24, 25]. However, Riak does not perform synchronous read repair due to the performance penalty [26]. Cassandra does wait for read repair to complete on quorum reads [27], but it loses linearizability if there are multiple concurrent writes to the same key, due to its use of last-write-wins conflict resolution.
Moreover, only linearizable read and write operations can be implemented in this way; a linearizable compare-and-set operation cannot, because it requires a consensus algorithm [28].
In summary, it is safest to assume that a leaderless system with Dynamo-style replication does not provide linearizability.
And a bit more explaination about Linearizability vs Serializability:
| Cassandra | 56,795,239 | 19 |
Here is the table I'm creating; it contains information about players that played in the last World Cup.
CREATE TABLE players (
group text, equipt text, number int, position text, name text,
day int, month int, year int,
club text, liga text, captain text,
PRIMARY key (name, day, month, year));
When doing the following query :
Obtain 5 names from the oldest players that were captain of the selection team
Here is my query:
SELECT name FROM players WHERE captain='YES' ORDER BY year DESC LIMIT 5;
And I am getting this error:
Order By only supported when partition key is restricted by EQ or IN
I think is a problem about the table I'm creating, but I don't know how to solve it.
Thanks.
| Your table definition is incorrect for the query you're trying to run.
You've defined a table with partition key "name", clustering columns "day", "month", "year", and various other columns.
In Cassandra all SELECT queries must specify a partition key with EQ or IN. You're permitted to include some or all of the clustering columns, using the equality and inequality operators you're used to in SQL.
The clustering columns must be included in the order they're defined. An ORDER BY clause can only include clustering columns that aren't already specific by an EQ, again in the order they're defined.
For example, you can write the query
select * from players where name = 'fiticida' and day < 5 order by month desc;
or
select * from players where name = 'fiticida' and day = 10 and month > 2 order by month asc;
but not
select * from players where name = 'fiticida' and year = 2017;
which doesn't include "day" or "month"
and not
select * from players where name = 'fiticida' and day = 5 order by year desc;
which doesn't include "month".
Here is the official documentation on the SELECT query.
To satisfy your query, the table needs
A partition key specified by EQ or IN: "captain" will work
An ORDER BY clause using the leftmost clustering column: put "year" to the left of "month" and "day" in your primary key definition
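A sketch of a table shaped for that exact query (types copied from the question; the table name is invented):
CREATE TABLE players_by_captain (
    captain text,
    year int,
    month int,
    day int,
    name text,
    PRIMARY KEY ((captain), year, month, day, name)
);
SELECT name FROM players_by_captain WHERE captain = 'YES' ORDER BY year DESC LIMIT 5;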
| Cassandra | 46,921,455 | 19 |
I'm running a single node at the moment. I'm trying to enable password authentication for Cassandra.
I'm following this guide: http://cassandra.apache.org/doc/latest/operating/security.html#password-authentication
I'll note that I didn't alter system_auth's replication as it's a single node cluster.
I edited cassandra.yaml to use authenticator: PasswordAuthenticator.
I then restarted cassandra and tried the command cqlsh -u cassandra -p cassandra, but that gives me the error:
Connection error: ('Unable to connect to any servers',
{'127.0.0.1': AuthenticationFailed(u'Failed to authenticate to 127.0.0.1:
code=0100 [Bad credentials] message="org.apache.cassandra.exceptions.
UnavailableException: Cannot achieve consistency level QUORUM"',)})
I've tried running nodetool repair but it says: Replication factor is 1. No repair is needed for keyspace 'system_auth'
How do I solve this?
| I managed to solve the problem.
I had to run ALTER KEYSPACE system_auth WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }; as it was set to {'class': 'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'} previously, even though it was a single node cluster.
This is why it couldn't achieve a QUORUM.
| Cassandra | 44,883,940 | 19 |
I want to transfer data from one Cassandra cluster (reached via 192.168.0.200) to another Cassandra cluster (reached via 127.0.0.1). The data is 523 rows but each row is about 1 MB. I am using the COPY FROM and COPY TO command. I get the following error when I issue the COPY TO command:
Error for (8948428671687021382, 9075041744804640605):
OperationTimedOut - errors={
'192.168.0.200': 'Client request timeout. See Session.execute[_async](timeout)'},
last_host=192.168.0.200 (will try again later attempt 1 of 5).
I tried to change the ~/.cassandra/cqlshrc file to:
[connection]
client_timeout = 5000
But this hasn't helped.
| You may want to increment the request timeout (default: 10 seconds), not the connect timeout.
Try:
cqlsh --request-timeout=6000
or add:
[connection]
request_timeout = 6000
to your ~/.cassandra/cqlshrc file.
| Cassandra | 39,955,968 | 19 |
Is there any way to prettify the results of cql commands in the Linux terminal while using the cqlsh utility (cql version of Mongo .pretty())? It becomes quite difficult to read the results when the output is displayed normally, especially when there are nested documents and arrays
| Perhaps you are interested in the EXPAND command?
Usage: EXPAND ON;
From the documentation over at Datastax:
This command lists the contents of each row of a table vertically, providing a more convenient way to read long rows of data than the default horizontal format. You scroll down to see more of the row instead of scrolling to the right. Each column name appears on a separate line in column one and the values appear in column two.
Source: https://docs.datastax.com/en/dse/5.1/cql/cql/cql_reference/cqlsh_commands/cqlshExpand.html
| Cassandra | 28,949,790 | 19 |
I have a Cassandra node on a machine. When I access cqlsh from the same machine it works properly.
But when I try to connect to its cqlsh using "192.x.x.x" from another machine, I'm getting an error saying
Connection error: ('Unable to connect to any servers', {'192.x.x.x': error(111, "Tried connecting to [('192.x.x.x', 9042)]. Last error: Connection refused")})
What is the reason for this? How can I fix it?
Probably the remote Cassandra node is not bound to the external network interface but to the loopback one (this is the default configuration). You can verify this by using "telnet thecassandrahost 9042" from the remote machine; it should not work.
In order to bind Cassandra to the external network interface you need to edit the cassandra.yaml configuration file and set the properties "listen_address" and "rpc_address" to your remote IP or "0.0.0.0" (not all versions of Cassandra support wildcard addresses).
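For example, the relevant cassandra.yaml lines might look like the following (the address is a placeholder for your node's real IP; note that on Cassandra 2.1+ an rpc_address of 0.0.0.0 also requires broadcast_rpc_address to be set), after which Cassandra must be restarted:
listen_address: 192.x.x.x
rpc_address: 0.0.0.0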
Check also that the firewall is properly configured or disabled (sudo service iptables stop).
| Cassandra | 27,758,795 | 19 |
Recently I've started working on Grails integration with Cassandra using the Java driver for cassandra(cassandra-driver-core-2.0.2).
So I was curious to know how we can find out how much size our table is taking to store the data in cassandra DB.
I have created a keyspace with name Customkeyspace and a column family called Movie in it.
So I was curious to know which tool/Command I have to use to know the size of the keyspace/Column family ?
| To get statistics regarding column families in Cassandra, you can simply run the command:
nodetool cfstats
It reports statistics about tables which include the live data size as well as on disk.
The documentation about this utility for Cassandra 2.1 is available here.
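On recent versions you can also restrict the report to a specific table, e.g. for the keyspace and column family from the question:
nodetool cfstats Customkeyspace.Movie
This prints only the statistics (including live and on-disk data size) for the Movie column family.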
| Cassandra | 27,011,886 | 19 |
I want to select specific fields of a table in Cassandra and insert them into another table. I do this in SQL Server like this:
INSERT INTO Users(name,family)
SELECT name,family FROM Users
How to to this in cassandra-cli or cqlsh?
| COPY keyspace.columnfamily1 (column1, column2,...) TO 'temp.csv';
COPY keyspace.columnfamily2 (column1, column2,...) FROM 'temp.csv';
Here, give your keyspace (schema name); for columnfamily1, use the table you want to copy from, and for columnfamily2, give the table name you want to copy into.
And yes, this is a solution for CQL; however, I have never tried it with the CLI.
| Cassandra | 21,363,046 | 19 |
"On a given physical node, rows for a given partition key are stored in the order induced by the clustering keys, making the retrieval of rows in that clustering order particularly efficient." (http://cassandra.apache.org/doc/cql3/CQL.html#createTableStmt) What kind of ordering is induced by clustering keys?
| Suppose your clustering keys are
k1 t1, k2 t2, ..., kn tn
where ki is the ith key name and ti is the ith key type. Then the order data is stored in is lexicographic ordering where each dimension is compared using the comparator for that type.
So (a1, a2, ..., an) < (b1, b2, ..., bn) if a1 < b1 using t1 comparator, or a1=b1 and a2 < b2 using t2 comparator, or (a1=b1 and a2=b2) and a3 < b3 using t3 comparator, etc..
This means that it is efficient to find all rows with a certain k1=a, since the data is stored together. But it is inefficient to find all rows with ki=x for i > 1. In fact, such a query isn't allowed - the only clustering key constraints that are allowed specify zero or more clustering keys, starting from the first with none missing.
For example, consider the schema
create table clustering (
x text,
k1 text,
k2 int,
k3 timestamp,
y text,
primary key (x, k1, k2, k3)
);
If you did the following inserts:
insert into clustering (x, k1, k2, k3, y) values ('x', 'a', 1, '2013-09-10 14:00+0000', '1');
insert into clustering (x, k1, k2, k3, y) values ('x', 'b', 1, '2013-09-10 13:00+0000', '1');
insert into clustering (x, k1, k2, k3, y) values ('x', 'a', 2, '2013-09-10 13:00+0000', '1');
insert into clustering (x, k1, k2, k3, y) values ('x', 'b', 1, '2013-09-10 14:00+0000', '1');
then they are stored in this order on disk (the order select * from clustering where x = 'x' returns):
x | k1 | k2 | k3 | y
---+----+----+--------------------------+---
x | a | 1 | 2013-09-10 14:00:00+0000 | 1
x | a | 2 | 2013-09-10 13:00:00+0000 | 1
x | b | 1 | 2013-09-10 13:00:00+0000 | 1
x | b | 1 | 2013-09-10 14:00:00+0000 | 1
k1 ordering dominates, then k2, then k3.
| Cassandra | 18,635,381 | 19 |
I am trying to understand Cassandra and how to structure my column families (CF) but it's quite hard since I am used to relational databases.
For example, if I create a simple users CF and I try to insert a new row, how can I make an auto-incrementing key like in MySQL?
I saw a lot of examples where you would just put the username instead of a unique ID, and that makes a little sense, but what if I want users to be able to have duplicate usernames?
Also, how can I do searches when, from what I understand, Cassandra does not support > operators, so something like select * from users where something > something2 would not work?
And probably the most important question: what about grouping? Would I need to retrieve all data and then filter it with whatever language I am using? I think that would slow down my system a lot.
So basically I need a brief explanation of how to get started with Cassandra.
| Your questions are quite general, but let me take a stab at it. First, you need to model your data in terms of your queries. With an RDBMS, you model your data in some normalized form, then optimize later for your specific queries. You cannot do this with Cassandra; you must write your data the way you intend to read it. Often this means writing it more than one way. In general, it helps to completely shed your RDBMS thinking if you want to work effectively with Cassandra.
Regarding keys:
They are used in Cassandra as the unit of distribution across the ring. So your key will get hashed and assigned an "owner" in the ring. Use the RandomPartitioner to guarantee even distribution
Presuming you use RandomPartitioner (you should), keys are not sorted. This means you cannot ask for a range of keys. You can, however, ask for a list of keys in a single query.
Keys are relevant in some models and not in others. If your model requires query-by-key, you can use any unique value that your application is aware of (such as a UUID). Sometimes keys are sentinel values, such as a Unix epoch representing the start of the day. This allows you to hand Cassandra a bunch of known keys, then get a range of data sorted by column (see below).
Regarding query predicates:
You can get ranges of data presuming you model it correctly to answer your queries.
Since columns are written in sorted order, you can query a range from column A to column n with a slice query (which is very fast). You can also use composite columns to abstract this mechanism a bit.
You can use secondary indexes on columns where you have low cardinality--this gives you query-by-value functionality.
You can create your own indexes where the data is sorted the way you need it.
Regarding grouping:
I presume you're referring to creating aggregates. If you need your data in real-time, you'll want to use some external mechanism (like Storm) to track data and constantly update your relevant aggregates into a CF. If you are creating aggregates as part of a batch process, Cassandra has excellent integration with Hadoop, allowing you to write map/reduce jobs in Pig, Hive, or directly in your language of choice.
| Cassandra | 12,709,277 | 19 |
We are starting a new java web-project with Cassandra as the database. The team is very well-experienced with RDBMS/JPA/Hibernate/Spring but very new to the world of NoSQL. We want to start the development with as simple setup as possible.
Hector seems to be the most preferred and popular choice for connecting to Cassandra. But, Netflix has recently offered Astyanax, which has its origins in Hector.
Can anyone who has used both these technologies share their experiences? I am looking for easy setup, good documentation and simple/clean usage.
Suggestions about other api's are also welcome.
| I've tried both and Astyanax is way easier. The API actually makes sense and reflects what you are actually doing. Both Hector and direct Thrift usually result in hard-to-decipher code.
There are some issues yet to be solved in Astyanax (e.g. getColumnByName), but I've decided to build my project using it.
Oh, I used the snapshot version (manually built, since it was not in any Maven repo) because of some outdated references.
| Cassandra | 9,481,578 | 19 |
What is the best approach to writing unit tests for code that persists data to a NoSQL data store, in our case Cassandra?
=> We are using the embedded server approach, using a utility from GitHub (https://github.com/hector-client/hector/blob/master/test/src/main/java/me/prettyprint/hector/testutils/EmbeddedServerHelper.java). However, I have been seeing some issues with this. 1) It persists data across multiple test cases, making it hard for us to make sure data is different in the test cases of a test class. I tried calling cleanUp @After each test case, but that doesn't seem to clean up the data. 2) We are running out of memory as we add more tests, and this could be because of 1, but I am not sure yet. I currently have a 1G heap size to run my build.
=> The other approach I have been thinking of is to mock the Cassandra storage. But that might let some issues in the Cassandra schema slip through, as we have often found the above approach catching issues with the way data is stored in Cassandra.
Please let me know your thoughts on this, and whether anyone has used EmbeddedServerHelper and is familiar with the issues I have mentioned.
Just an update: I was able to resolve 2), the running-out-of-Java-heap-space issue when running builds, by changing the in_memory_compaction_limit_in_mb parameter to 32 in the cassandra.yaml used by the test embedded server. The link below helped me: http://www.datastax.com/docs/0.7/configuration/storage_configuration#in-memory-compaction-limit-in-mb. It was 64 before, and builds started to fail consistently during compaction.
| We use an embedded cassandra server, and I think that is the best approach when testing cassandra, mocking the cassandra API is too error prone.
EmbeddedServerHelper.cleanup() just removes files from the file system, but data may still exist in memory.
There is a teardown() method in EmbeddedServerHelper, but I am not sure how effective that is, as Cassandra has a lot of static singletons whose state is not cleaned up by teardown().
What we do is we have a method that calls truncate on each column family between tests. That will remove all data.
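A minimal sketch of that pattern with JUnit and Hector might look like the following (the keyspace and column family names are assumptions, and it presumes your Hector version exposes Cluster.truncate and that the Cluster is connected to the embedded server):
import java.util.Arrays;
import me.prettyprint.hector.api.Cluster;
import org.junit.After;

public abstract class EmbeddedCassandraTestBase {
    // connected once to the embedded server, e.g. in a @BeforeClass method
    protected static Cluster cluster;

    @After
    public void truncateColumnFamilies() {
        // wipe the column families the tests write to, so every test starts clean
        for (String cf : Arrays.asList("Users", "Events")) {
            cluster.truncate("TestKeyspace", cf);
        }
    }
}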
| Cassandra | 6,612,104 | 19 |
I am looking for an eventually consistent data store and it looks like it may be coming down to Riak or Cassandra. Does anyone have experience with, or a view on, this?
| As you probably know, they are both architecturally strongly influenced by Dynamo (eventually consistent, no single points of failure, etc). Both also go beyond Dynamo in providing a "richer than pure K/V" data model -- in Cassandra's case, providing a Bigtable-like ColumnFamily mode, in Riak's, a Document-oriented one. I have seen sane people choose both.
I believe points that favor Cassandra include
speed
support for clusters spanning multiple data centers
big names using it (digg, twitter, facebook, webex, ... -- http://n2.nabble.com/Cassandra-users-survey-tp4040068p4040393.html)
Points that favor Riak include
map/reduce support out of the box
/Cassandra dev, fwiw
| Cassandra | 2,123,507 | 19 |
I am using datastax java driver 3.1.0 to connect to cassandra cluster and my cassandra cluster version is 2.0.10. I am writing asynchronously with QUORUM consistency.
private final ExecutorService executorService = Executors.newFixedThreadPool(10);
public void save(String process, int clientid, long deviceid) {
String sql = "insert into storage (process, clientid, deviceid) values (?, ?, ?)";
try {
BoundStatement bs = CacheStatement.getInstance().getStatement(sql);
bs.setConsistencyLevel(ConsistencyLevel.QUORUM);
bs.setString(0, process);
bs.setInt(1, clientid);
bs.setLong(2, deviceid);
ResultSetFuture future = session.executeAsync(bs);
Futures.addCallback(future, new FutureCallback<ResultSet>() {
@Override
public void onSuccess(ResultSet result) {
logger.logInfo("successfully written");
}
@Override
public void onFailure(Throwable t) {
logger.logError("error= ", t);
}
}, executorService);
} catch (Exception ex) {
logger.logError("error= ", ex);
}
}
My above save method will be called from multiple threads at very fast speed.
Question:
I want to throttle the requests to the executeAsync method, which writes asynchronously into Cassandra. If I write at a higher speed than my Cassandra cluster can handle, it will start throwing errors, and I want all my writes to go into Cassandra successfully without any loss.
I saw this post where the solution is to use a Semaphore with a fixed number of permits. But I am not sure how, and what the best way is, to implement that. I have never used a Semaphore before. This is the logic. Can anyone provide an example with a Semaphore based on my code, or, if there is a better way/option, let me know as well.
In the context of writing a dataloader program, you could do something
like the following:
To keep things simple use a Semaphore or some other construct with a fixed number of permits (that will be your maximum number of inflight
requests). Whenever you go to submit a query using executeAsync,
acquire a permit. You should really only need 1 thread (but may want
to introduce a pool of # cpu cores size that does this) that acquires
the permits from the Semaphore and executes queries. It will just
block on acquire until there is an available permit.
Use Futures.addCallback for the future returned from executeAsync. The callback should call Sempahore.release() in both onSuccess and
onFailure cases. By releasing a permit, this should allow your thread
in step 1 to continue and submit the next request.
Also I have seen a couple of other posts where they talk about using a RingBuffer or Guava's RateLimiter, so which one is better and which should I be using? Below are the options I can think of:
Using Semaphore
Using Ring Buffer
Using Guava Rate Limiter
Can anyone help me with an example of how we can throttle the request or get backpressure for cassandra writes and making sure all writes goes successfully into cassandra?
| Not an authoritative answer, but maybe it will be helpful. First you should consider what you would do when a query cannot be executed right away. No matter which rate limiting you choose, if you get requests at a higher rate than you can write to Cassandra, eventually your process will get clogged with waiting requests. At that moment you would need to tell your clients to hold their requests for a while ("push back"); e.g. if they are coming via HTTP, then the response status would be 429 "Too Many Requests". If you generate requests in the same process, then decide what the longest acceptable timeout is. That said, if Cassandra cannot keep up, then it's time to scale (or tune) it.
Maybe before implementing rate limits it's worth experimenting a little: add artificial delays to your threads before the call to the save method (e.g. using Thread.sleep(...)) and see whether this is indeed your problem or something else is needed.
A query returning an error is back-pressure from Cassandra. But you may choose or implement a RetryPolicy to determine when to retry failed queries.
Also you may look at connection pool options (and especially Monitoring and tuning the pool). One can tune the number of asynchronous requests per connection. However, the documentation says that for Cassandra 2.x this parameter is capped at 128 and one should not change it (I'd experiment with it though :)
Implementation with Semaphore looks like
/* Share it among all threads or associate with a thread for per-thread limits
Number of permits is to be tuned depending on acceptable load.
*/
final Semaphore queryPermits = new Semaphore(20);
public void save(String process, int clientid, long deviceid) {
....
queryPermits.acquire(); // Blocks until a permit is available
ResultSetFuture future = session.executeAsync(bs);
Futures.addCallback(future, new FutureCallback<ResultSet>() {
@Override
public void onSuccess(ResultSet result) {
queryPermits.release();
logger.logInfo("successfully written");
}
@Override
public void onFailure(Throwable t) {
queryPermits.release(); // Permit should be released in all cases.
logger.logError("error= ", t);
}
}, executorService);
....
}
(In real code I'd create a decorator that would call wrapped method and then release the permits.)
Guava's RateLimiter is similar to semaphore but allows temporary bursts after underutilization periods and limits requests based on timing (not total number of active queries).
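If you would rather limit by timing than by in-flight count, a minimal sketch with Guava's RateLimiter (the 500 permits/second rate is an assumption you would have to tune) looks like:
/* Shared limiter; RateLimiter.create takes the allowed permits per second.
   (com.google.common.util.concurrent.RateLimiter)
*/
final RateLimiter rateLimiter = RateLimiter.create(500.0);

public void save(String process, int clientid, long deviceid) {
    ....
    rateLimiter.acquire(); // blocks until the configured rate allows another request
    ResultSetFuture future = session.executeAsync(bs);
    ....
}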
However requests will fail for various reasons anyway so probably it's better to have a plan how to retry them (in case of intermittent errors).
It might not be appropriate in your case, but I'd try to use some queue or buffer to enqueue requests (e.g. java.util.concurrent.ArrayBlockingQueue). "Buffer full" would mean that clients should wait or give up the request. The buffer would also be used to re-enqueue failed requests. However, to be fairer, failed requests should probably be put at the front of the queue so they are retried first. Also, one should somehow handle the situation when the queue is full and there are new failed requests at the same time. A single-threaded worker would then pick requests from the queue and send them to Cassandra. Since it should not do much, it's unlikely to become a bottleneck. This worker can also apply its own rate limits, e.g. based on timing with com.google.common.util.concurrent.RateLimiter.
If one would want to avoid losing messages as much as possible they can put a message broker with persistence (e.g. Kafka) in front of Cassandra. This way incoming messages can survive even long outages of Cassandra. But, I guess, it's overkill in your case.
| Cassandra | 41,049,753 | 18 |
I am using Cassandra and, during startup, Netty prints a warning with a stack trace:
"Found Netty's native epoll transport in the classpath, but epoll is not available. Using NIO instead."
The application works normally, but is there a way to fix the warning?
Here is the full stack trace:
16:29:46 WARN com.datastax.driver.core.NettyUtil - Found Netty's native epoll transport in the classpath, but epoll is not available. Using NIO instead.
java.lang.UnsatisfiedLinkError: no netty-transport-native-epoll in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:168)
at io.netty.channel.epoll.Native.<clinit>(Native.java:49)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:30)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.datastax.driver.core.NettyUtil.<clinit>(NettyUtil.java:68)
at com.datastax.driver.core.NettyOptions.eventLoopGroup(NettyOptions.java:101)
at com.datastax.driver.core.Connection$Factory.<init>(Connection.java:709)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1386)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:341)
at com.datastax.driver.core.Cluster.connect(Cluster.java:286)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.connect(CassandraCqlSessionFactoryBean.java:100)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.afterPropertiesSet(CassandraCqlSessionFactoryBean.java:94)
at org.springframework.data.cassandra.config.CassandraSessionFactoryBean.afterPropertiesSet(CassandraSessionFactoryBean.java:60)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1642)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1579)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:207)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1128)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1056)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:467)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1128)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1023)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:108)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1486)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1231)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:207)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1128)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1056)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:566)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:349)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1219)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:751)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:861)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:761)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:371)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
...
| If you are running on Linux, you can use the native Linux driver. For instance, if you are using Maven, add a dependency like this:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>4.0.27.Final</version>
<classifier>linux-x86_64</classifier>
</dependency>
Alternatively, you can suppress the warning by setting
-Dcom.datastax.driver.FORCE_NIO=true. As it is only a performance optimization, it is a viable option to ignore it.
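The same property can also be set programmatically, as long as it runs before the first Cluster is built:
// must run before the driver initializes its connections
System.setProperty("com.datastax.driver.FORCE_NIO", "true");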
For more background, refer to the Netty documentation about native transports.
| Cassandra | 40,746,505 | 18 |
How does the Cassandra client choose the coordinator node?
Does the coordinator node store the data sent by the client before replicating it?
| The coordinator is selected by the driver based on the policy you have set. Common policies are DCAwareRoundRobinPolicy and TokenAware Policy.
For DCAwareRoundRobinPolicy, the driver selects the coordinator node based on its round robin policy. See more here: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.html
For TokenAwarePolicy, it selects a coordinator node that has the data being queried - to reduce "hops" and latency. More info: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/TokenAwarePolicy.html
It is a best practice to wrap policies so there is a primary and secondary policy should there be an issue. More information available at the links above.
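For example, wrapping a primary policy when building the cluster might look like this (a minimal sketch; the contact point and data center name are assumptions):
// TokenAwarePolicy and DCAwareRoundRobinPolicy live in com.datastax.driver.core.policies
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        // pick a replica for the queried partition when possible,
        // otherwise fall back to round-robin within the local DC
        .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1")))
        .build();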
| Cassandra | 32,867,869 | 18 |
I'm trying to run the following example from here
CREATE TYPE address (
street text,
city text,
zip int
);
CREATE TABLE user_profiles (
login text PRIMARY KEY,
first_name text,
last_name text,
email text,
addresses map<text, address>
);
However, when I try to create the user_profiles table, I get the following error:
InvalidRequest: code=2200 [Invalid query] message="Non-frozen collections are not
allowed inside collections: map<text, address>
Any thoughts on why this could be happening?
| I am running 2.1.8 and I get the same error message. To fix this, you need the frozen keyword:
CREATE TABLE user_profiles (
login text PRIMARY KEY,
first_name text,
last_name text,
email text,
addresses map<text, frozen <address>>
);
Frozen is necessary for UDTs (for now) as it serializes them into a single value. A similar, better example for you to follow might be the one in the User Defined Type documentation. Give that a try.
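Once the table uses the frozen keyword, writes work as usual; as a minimal sketch via the Java driver (the concrete login and address values are just illustrations):
// UDT literals use unquoted field names; the map literal follows normal CQL syntax
session.execute(
    "INSERT INTO user_profiles (login, addresses) VALUES ('jdoe', " +
    "{'home': {street: '123 Main St', city: 'Springfield', zip: 12345}})");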
| Cassandra | 31,953,841 | 18 |
One problem with blob for me is that, in Java, ByteBuffer (which is mapped to blob in Cassandra) is not Serializable and hence does not work well with EJBs.
Considering the JSON is fairly large, what would be the better type for storing JSON in Cassandra: text or blob?
Does the size of the JSON matter when deciding between blob and text?
If it were any other database, like Oracle, it would be common to use blob/clob. But in Cassandra, where each cell can hold as much as 2GB, does it matter?
Please consider this question as a choice between text and blob for this case, instead of resorting to suggestions regarding whether to use a single column for the JSON.
| I don't think there's any benefit to storing the literal JSON data as a BLOB in Cassandra. At best your storage costs are identical, and in general the APIs are less convenient in terms of working with BLOB types than they are for working with strings/text.
For instance, if you're using their Java API then in order to store the data as a BLOB using a parameterized PreparedStatement you first need to load it all into a ByteBuffer, for instance by packing your JSON data into an InputStream.
Unless you're dealing with very large JSON snippets that force you to stream your data anyways, that's a fair bit of extra work to get access to the BLOB type. And what would you gain from it? Essentially nothing.
However, I think there's some merit in asking 'Should I store JSON as text, or gzip it and store the compressed data as a BLOB?'.
And the answer to that comes down to how you've configured Cassandra and your table. In particular, as long as you're using Cassandra version 1.1 or later your tables have compression enabled by default. That may be adequate, particularly if your JSON data is fairly uniform across each row.
However, Cassandra's built-in compression is applied table-wide, rather than to individual rows. So you may get a better compression ratio by manually compressing your JSON data before storage, writing the compressed bytes into a ByteBuffer, and then shipping the data into Cassandra as a BLOB.
So it essentially comes down to a tradeoff in terms of storage space vs. programming convenience vs. CPU usage. I would decide the matter as follows:
Is minimizing the amount of storage consumed your biggest concern?
If yes, compress the JSON data and store the compressed bytes as a BLOB;
Otherwise, proceed to #2.
Is Cassandra's built-in compression available and enabled for your table?
If no (and if you can't enable the compression), compress the JSON data and store the compressed bytes as a BLOB;
Otherwise, proceed to #3.
Is the data you'll be storing relatively uniform across each row?
Probably for JSON data the answer is 'yes', in which case you should store the data as text and let Cassandra handle the compression;
Otherwise proceed to #4.
Do you want efficiency, or convenience?
Efficiency; compress the JSON data and store the compressed bytes as a BLOB.
Convenience; compress the JSON data, base64 the compressed data, and then store the base64-encoded data as text.
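For the "compress and store as a BLOB" branches above, a minimal Java sketch might look like this (the json_docs table, its columns, and the surrounding variables json, id, ps, and session are assumptions):
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// ps was prepared as: "INSERT INTO json_docs (id, body) VALUES (?, ?)" (hypothetical table)
byte[] raw = json.getBytes(StandardCharsets.UTF_8);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
try (GZIPOutputStream gzip = new GZIPOutputStream(baos)) {
    gzip.write(raw); // gzip the serialized JSON before it leaves the application
}
ByteBuffer blob = ByteBuffer.wrap(baos.toByteArray());
session.execute(ps.bind(id, blob)); // the driver maps ByteBuffer to the blob type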
| Cassandra | 31,339,150 | 18 |
I'm getting this error when starting cassandra after upgrade. Any idea?
# cassandra -f
xss = -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42 -Xms1920M -Xmx1920M -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
The stack size specified is too small, Specify at least 228k
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
| I have fixed it by editing file /etc/cassandra/cassandra-env.sh
I have changed JVM_OPTS="$JVM_OPTS -Xss180k" to JVM_OPTS="$JVM_OPTS -Xss256k"
and it worked.
Basically the value of the Xss parameter determines the stack size. As the error indicates, it is too small, so simply increasing Xss solves the problem. It was 180k before and I have increased it to 256k. The right value can differ between machines according to the size of the database.
| Cassandra | 22,470,628 | 18 |
I need to insert 60GB of data into cassandra per day.
This breaks down into
100 sets of keys
150,000 keys per set
4KB of data per key
In terms of write performance am I better off using
1 row per set with 150,000 keys per row
10 rows per set with 15,000 keys per row
100 rows per set with 1,500 keys per row
1000 rows per set with 150 keys per row
Another variable to consider, my data expires after 24 hours so I am using TTL=86400 to automate expiration
More specific details about my configuration:
CREATE TABLE stuff (
stuff_id text,
stuff_column text,
value blob,
PRIMARY KEY (stuff_id, stuff_column)
) WITH COMPACT STORAGE AND
bloom_filter_fp_chance=0.100000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=39600 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'tombstone_compaction_interval': '43200', 'class': 'LeveledCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
Access pattern details:
The 4KB value is a set of 1000 4 byte floats packed into a string.
A typical request is going to need a random selection of 20 - 60 of those floats.
Initially, those floats are all stored in the same logical row and column. A logical row here represents a set of data at a given time if it were all written to one row with 150,000 columns.
As time passes some of the data is updated, within a logical row within the set of columns, a random set of levels within the packed string will be updated. Instead of updating in place, the new levels are written to a new logical row combined with other new data to avoid rewriting all of the data which is still valid. This leads to fragmentation as multiple rows now need to be accessed to retrieve that set of 20 - 60 values. A request will now typically read from the same column across 1 - 5 different rows.
Test Method
I wrote 5 samples of random data for each configuration and averaged the results. Rates were calculated as (Bytes_written / (time * 10^6)). Time was measured in seconds with millisecond precision. Pycassa was used as the Cassandra interface. The Pycassa batch insert operator was used. Each insert inserts multiple columns to a single row, insert sizes are limited to 12 MB. The queue is flushed at 12MB or less. Sizes do not account for row and column overhead, just data. The data source and data sink are on the same network on different systems.
Write results
Keep in mind there are a number of other variables in play due to the complexity of the Cassandra configuration.
1 row 150,000 keys per row: 14 MBps
10 rows 15,000 keys per row: 15 MBps
100 rows 1,500 keys per row: 18 MBps
1000 rows 150 keys per row: 11 MBps
| The answer depends on what your data retrieval pattern is, and how your data is logically grouped. Broadly, here is what I think:
Wide row (1 row per set): This could be the best solution as it prevents the request from hitting several nodes at once, and with secondary indexing or composite column names, you can quickly filter data to your needs. This is best if you need to access one set of data per request. However, doing too many multigets on wide rows can increase memory pressure on nodes, and degrade performance.
Skinny row (1000 rows per set): On the other hand, a wide row can give rise to read hotspots in the cluster. This is especially true if you need to make a high volume of requests for a subset of data that exists entirely in one wide row. In such a case, a skinny row will distribute your requests more uniformly throughout the cluster, and avoid hotspots. Also, in my experience, "skinnier" rows tend to behave better with multigets.
I would suggest, analyze your data access pattern, and finalize your data model based on that, rather than the other way around.
| Cassandra | 19,039,123 | 18 |
I am trying to remodel a SQL database in Cassandra such that I can find the Cassandra equivalent for the SQL queries. I use CQL 3 and Cassandra v1.2. I modeled the db design in Cassandra so that it supports the ORDER BY clauses and denormalized tables to support the join operation. However I am at sea when it comes to DISTINCT, SUM() and GROUP BY equivalents:
SELECT a1,MAX(b1) FROM demo1 group by a1.
SELECT DISTINCT (a2) FROM demo2 where b2='sea'
SELECT sum(a3), sum(b3) from demo3 where c3='water' and d3='ocean'
This is like a showstopper for my work for the past couple of days. Is there a way in Cassandra that I can model the db schema to support queries of this kind? I can't think of any way in Cassandra. How can such queries be implemented using Cassandra?
I read that a Hive layer over Cassandra can possibly make these queries work. I am just wondering if that is the only way such queries can be supported in Cassandra? Please advise on any other possible methods.
| With Cassandra you solve these kinds of problems by doing more work when you insert your data -- which sounds like it would be slow, but Cassandra is designed for fast writes, and you're probably going to read the data many more times than you write it so it makes sense when you consider the whole system.
I can't tell you exactly how to create your tables to model your problem because it will depend a lot on the details. You need to figure a schema that lets you get the data without performing any on-the-fly aggregations. Think about how you would create views for the queries in an RDBMS, and then try to think how you would insert data directly into those views, not into the underlying tables. That's kind of how you model things in Cassandra.
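For example, a common way to "insert directly into the view" for SUM-style queries is to maintain the aggregate at write time; a minimal sketch (the counter table and its name are assumptions) could be:
// Assumed schema: CREATE TABLE state_user_counts (state text PRIMARY KEY, users counter);
// Each time a user in TX is written, bump the pre-computed aggregate alongside it:
session.execute("UPDATE state_user_counts SET users = users + 1 WHERE state = 'TX'");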
| Cassandra | 17,342,176 | 18 |
I found lazyboy and pycassa - maybe there are others too. I've seen many sites recommending lazyboy. IMHO the project seems dead, see https://www.ohloh.net/p/compare?project_0=pycassa&project_1=lazyboy
So what's the best option for a new project? Thanks.
| The Cassandra project has been recommending that new projects use CQL for a few versions now, and with the advent of CQL 3 in Cassandra 1.1, I'd definitely recommend going right to that. Advantages include a more familiar syntax if you've used SQL before, and a commonality of interface between the different language CQL drivers. CQL is CQL, whether you use it from Java, Python, Ruby, Node.js, or whatever. Drivers don't need to support as much as full Cassandra client libraries, so there is less need for maintenance and less dependence on client authors.
The Python CQL driver is on GitHub: datastax/python-driver. (Previous releases were on Google Code.)
For information on CQL, see Datastax's quite thorough docs for CQL 2, a post on how to make effective data models with CQL 3, and a post on what's new in CQL 3 overall.
There's also a full reference on CQL 3 which is pending approval into the official Cassandra repo; while it's waiting, you should be able to read it here in pcmanus' github.
All that said, though, if you'd rather not use CQL, Pycassa really is better maintained and ought to have good support for quite some time.
| Cassandra | 10,430,417 | 18 |
I have a cluster with three nodes and I need to remove one node. How can I make sure the data from the node to be removed will be replicated to the two other nodes before I actually remove it? Is this done using snapshots? How should I proceed?
| From the doc
You can take a node out of the cluster with nodetool decommission to a
live node, or nodetool removenode (to any other machine) to remove a
dead one. This will assign the ranges the old node was responsible for
to other nodes, and replicate the appropriate data there. If
decommission is used, the data will stream from the decommissioned
node. If removenode is used, the data will stream from the remaining
replicas.
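Concretely, the commands look something like this (the host ID for a dead node comes from nodetool status):
nodetool decommission          # run on the live node that is being removed
nodetool removenode <host-id>  # run on any remaining node, for a dead node
Wait for the streaming to finish before shutting the machine down.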
| Cassandra | 10,306,021 | 18 |
Please excuse any mistakes in terminology. In particular, I am using relational database terms.
There are a number of persistent key-value stores, including CouchDB and Cassandra, along with plenty of other projects.
A typical argument against them is that they do not generally permit atomic transactions across multiple rows or tables. I wonder if there's a general approach would would solve this issue.
Take for example the situation of a set of bank accounts. How do we move money from one bank account to another? If each bank account is a row, we want to update two rows as part of the same transaction, reducing the value in one and increasing the value in another.
One obvious approach is to have a separate table which describes transactions. Then, moving money from one bank account to another consists of simply inserting a new row into this table. We do not store the current balances of either of the two bank accounts and instead rely on summing up all the appropriate rows in the transactions table. It is easy to imagine that this would be far too much work, however; a bank may have millions of transactions a day and an individual bank account may quickly have several thousand 'transactions' associated with it.
A number (all?) of key-value stores will 'roll back' an action if the underlying data has changed since you last grabbed it. Possibly this could be used to simulate atomic transactions, then, as you could then indicate that a particular field is locked. There are some obvious issues with this approach.
Any other ideas? It is entirely possible that my approach is simply incorrect and I have not yet wrapped my brain around the new way of thinking.
| If, taking your example, you want to atomically update the value in a single document (row in relational terminology), you can do so in CouchDB. You will get a conflict error when you try to commit the change if another contending client has updated the same document since you read it. You will then have to read the new value, update and re-try the commit. There is an indeterminate (possibly infinite if there is a lot of contention) number of times you may have to repeat this process, but you are guaranteed to have a document in the database with an atomically updated balance if your commit ever succeeds.
If you need to update two balances (i.e. a transfer from one account to another), then you need to use a separate transaction document (effectively another table where rows are transactions) that stores the amount and the two accounts (in and out). This is a common bookkeeping practice, by the way. Since CouchDB computes views only as needed, it is actually still very efficient to compute the current amount in an account from the transactions that list that account. In CouchDB, you would use a map function that emitted the account number as key and the amount of the transaction (positive for incoming, negative for outgoing). Your reduce function would simply sum the values for each key, emitting the same key and total sum. You could then use a view with group=True to get the account balances, keyed by account number.
| Cassandra | 1,093,115 | 18 |
I'm new to Java. While exploring the ways of monitoring Cassandra, I found out(https://cassandra.apache.org/doc/latest/operating/metrics.html) that "Metrics in Cassandra are managed using the Dropwizard Metrics library". However, at several places I've read about Codahale Metrics which has got me confused regarding the difference/relationship between the two.
Are these different libraries doing the same thing or is it that what's called as dropwizard metrics used to be called as Codahale Metrics earlier?
| The Metrics library has changed its package naming over versions as it has changed hands in ownership a bit:
yammer -> codahale -> dropwizard
They are all the same library, but dropwizard is the most up-to-date version.
| Cassandra | 49,557,777 | 17 |
I have an application where the 'natural' partition key for a Cassandra table seems like it would be 'customer'. This is the primary way we want to query the data, we would get good data distribution, etc.
But if there were well over 1 million customers, would that be too many different partitions?
Should I choose a partition key that results in a smaller number of partition keys?
I've looked at a number of the related questions on this topic but none seem to address this particular point.
|
But if there were well over 1 million customers, would that be too many different partitions?
No. The Murmur3Partitioner can handle something like 2^64 (-2^63 to +2^63) partitions. Cassandra is designed to be very good at storing large amounts of data and retrieving by partition key. There are restrictions on the number of columns within a partition (2 billion), but for total number of partitions I think you'll be fine with what you have.
Should I choose a partition key that results in a smaller number of partition keys?
Definitely not. That could cause your partitions to grow too big, and/or develop "hot spots" in your cluster.
The main task behind picking a good partition key, is to find one that (both) offers good data distribution in the cluster, and matches your query patterns. And from what I'm reading, it sounds like you have done exactly that.
| Cassandra | 30,648,479 | 17 |
I wonder why nodetool doesn't know the percentage of the ring handled by my node...
I created this keyspace with
CREATE KEYSPACE mykeyspace WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '3'} AND durable_writes = true;
Someone has a clue?
| Okay, got it. I have to specify a keyspace!
nodetool status mykeyspace
does the trick
| Cassandra | 29,921,809 | 17 |
Theoretically, Cassandra allows up to 2 billion columns in a wide row.
I have heard that in reality up to 50.000 cols/50 MB are fine; 50.000-100.000 cols/100 MB are OK but require some tuning; and that one should never go above 100.000/100 MB columns per row. The reason being that this will put pressure on the heap.
Is there some truth to this?
| In Cassandra, the maximum number of cells (rows x columns) in a single partition is 2 billion.
Additionally, a single column value may not be larger than 2GB, but in practice, "single digits of MB" is a more reasonable limit, since there is no streaming or random access of blob values.
Partitions greater than 100Mb can cause significant pressure on the heap.
| Cassandra | 28,893,365 | 17 |
I know we can use Cassandra's virtual node facility so that we can avoid the additional overhead of assigning tokens (start tokens) to the different nodes of a cluster. Instead, we use num_tokens, whose default value is 256.
In what way do these virtual nodes make a difference in partitioning?
|
| What are virtual nodes?
Prior to Cassandra 1.2, each node was assigned to a specific token range. Now each node can support multiple, non-contiguous token ranges. Instead of a node being responsible for one large range of tokens, it is responsible for many smaller ranges. In this way, one physical node is essentially hosting many smaller "virtual" nodes.
In what way do these virtual nodes make a difference in partitioning?
Consider the image in this blog: Virtual nodes in Cassandra 1.2.
Having many smaller token ranges (nodes) on each physical node allows for a more even distribution of data. This becomes evident when you add a physical node to the cluster, in that rebalancing (manually reassigning token ranges) is no longer necessary. As the Virtual Node documentation states, the new node "assumes responsibility for an even portion of data from the other nodes in the cluster."
Does Cassandra set/assign a token range (max and minimum token) for a particular node?
Yes, Cassandra predetermines the size of each virtual node. However, you can control the number of virtual nodes assigned to each physical node. Assume that your physical nodes are all configured for the default of 256 virtual nodes. If you add a new machine with more resources than your current nodes, and you want that machine to handle more load, you could configure it to allow 384 virtual nodes instead. Likewise, a machine with fewer resources could be configured to support a smaller number of virtual nodes.
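For reference, the number of virtual nodes is controlled by the num_tokens setting in each node's cassandra.yaml, set before the node first joins the cluster:
num_tokens: 256
(e.g. 384 for a machine with more resources, fewer for a smaller one, as described above).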
Edit 20230628
I do not understand the relationship between vnode and partitioner (let's take murmur3).
A VNode's token range is calculated using the Murmur3 algorithm.
A partition key, once created, must land on some vnode?
Yes.
How do we ensure this vnode will have enough space on disk?
We don't, but vnodes don't change that. As usual it's up to the DBA and dev teams to work together on appropriately sizing the anticipated compute resource usage up-front. But with more, smaller ranges, generated tokens should be distributed more evenly.
What if too many partition keys will land on the same vnode?
Then add another node to the cluster. The node add operation will bisect the current node's token ranges and reassign them to other nodes. This is no different than if we weren't using VNodes, although with VNodes there's a much lower chance of this becoming a problem.
The token creation algorithm is different from that of partitioning?
Yes! The token partition algorithm is one of either Murmur3 or MD5 (RandomPartitioner). The creation of Murmur3 tokens is faster than the RandomPartitioner, because the delivered MD5 hash in Java did a lot of other things that we just don't need.
| Cassandra | 25,379,457 | 17 |
The Datastax Java driver (cassandra-driver-core 2.0.2) for Cassandra supports PreparedStatements as well as the QueryBuilder API. Are there any specific advantages to using one over the other? Disadvantages?
Documentation: http://www.datastax.com/documentation/developer/java-driver/2.0/common/drivers/reference/driverReference_r.html
The above doc does not state any advantages of using the QueryBuilder API over PreparedStatements, other than writing queries programmatically, which isn't much of an advantage (in my book).
Please share your thoughts and experiences. Thanks.
| PreparedStatements gives you a performance boost since what you're executing is already stored server-side (assuming you re-use the statements). You just bind new concrete values and re-execute the statement.
The query builder is a fancier way of creating a string statement to be executed as is without requiring any preparation.
From a performance standpoint the first option is fastest, the second and third are identical:
// below prepared statement has already been prepared, we're now just re-using
PreparedStatement ps = session.prepare("SELECT * FROM users WHERE uname=?");
1) session.execute(ps.bind("david"));
2) session.execute("SELECT * FROM users WHERE uname='david'");
3) session.execute(QueryBuilder.select()
                       .all()
                       .from("users")
                       .where(QueryBuilder.eq("uname", "david")));
Not too sure if this is relevant, but there's a good example of migrating from string execution of queries built with the query builder to using pre-built prepared statements in this ycsb client.
| Cassandra | 24,924,800 | 17 |
Full-text search in Cassandra:
I am fairly new to Cassandra and wish to understand it more properly. I am attempting to perform a full-text search in Cassandra, but after some research I have found that there may not be a "simple" approach for this... and I say "may" because the first page of Google hasn't said much of anything.
So I am trying to understand instead what the best approach here is. This sort of led me to make up my own assumptions based on what I've learned so far about Cassandra, namely these two principles: a) design your tables based on your queries, rather than the data, and b) more data is a good thing, as long as it is being used properly.
With that being said, I've come up with a couple of solutions I'd like to share, and I also ask that if anyone has a better idea, please fill me in on it before I commit to anything unreasonable/naive.
First Solution: Create a Column Family (CF) with two primary keys and an index, like so:
CREATE TABLE "FullTextSearch" (
"PartialText" text,
"TargetIdentifier" uuid,
"CompleteText" text,
"Type" int,
PRIMARY KEY ("PartialText","TargetIdentifier")
);
CREATE INDEX IX_FullTextSearch_Type ON "keyspace"."FullTextSearch" ("Type");
With the above table, I would need to insert rows for the text "Hello World" as follows:
BEGIN BATCH
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('H',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('He',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Hel',00000000-0000-0000-0000-000000000000,'Hello World',1);
.....
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Hello Wor',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Hello Worl',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Hello World',00000000-0000-0000-0000-000000000000,'Hello World',1);
.....
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Wor',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('Worl',00000000-0000-0000-0000-000000000000,'Hello World',1);
INSERT INTO "FullTextSearch" ("PartialText","TargetIdentifier","CompleteText","Type") VALUES ('World',00000000-0000-0000-0000-000000000000,'Hello World',1);
APPLY BATCH;
Basically, the above will satisfy the following wildcards/partial texts: "%o W%", "Hello%", "Worl%"; however, it will not satisfy partial words such as "%ell%" for "Hello", which I can feel alright about for now... (OCD sorta kicks in here)
This approach sort of sucks for me because I would now have to delete/re-insert any time a save/name change occurs on the "TargetIdentifier";
The second solution would be very similar, only this time making use of wide columns, where the table might look like:
CREATE TABLE "FullTextSearch" (
"TargetIdentifier" uuid,
"Type" int,
"CompleteText" text,
PRIMARY KEY("TargetIdentifier")
);
and now during a search something like:
SELECT * FROM "FullTextSearch" WHERE "He" = 1;
so that if the column exists, the respective rows are returned;
Third Solution:
Similar to the one above, only this time, instead of using wide columns, we use a collection column such as a map for the partial texts, and perform a query like:
SELECT * FROM "FullTextSearch" WHERE "PartialTexts"['He'] = 1;
Anyways, I am all out of ideas, it is late, and I can only hope for a great response! Please, let me know what I should be doing here... am I even on the right path?
| AFAIK Datastax Enterprise Search is the (commercial) successor of Solandra.
Cassandra 2.0 supports so called "custom secondary indexes".
Custom secondary indexes are Java code. Your own implementation has to implement the abstract class org.apache.cassandra.db.index.SecondaryIndex
(See http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/create_index_r.html)
I'm not sure whether implementations exist for Elasticsearch or Solr.
I would not recommend coding all the weird full-text search logic like stemming, multiple/exotic language support or even geospatial stuff.
But SecondaryIndex would be a good point to start integrating your favorite search engine.
| Cassandra | 24,858,141 | 17 |
In the earlier beta releases of CQL, there was a command I could use to set the read / write consistency of an individual CQL operation. It looked like this:
SELECT * FROM users WHERE state='TX' USING CONSISTENCY QUORUM;
I use CQL3 regularly and have a use-case where I need to be able to perform a read with a higher consistency level than the rest of our application.
I looked through the CQL3 reference and didn't find any mention of any CQL syntax that allows me to change the consistency settings on a per-query basis, unless I'm using cqlsh (not useful for application development.)
How am I supposed to tune the consistency on a per-request basis using CQL3?
| First set the consistency by running the command:
CONSISTENCY QUORUM;
and then run your query:
SELECT * FROM users WHERE state='TX'
At any point you can check the consistency using:
CONSISTENCY;
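Note that CONSISTENCY is a cqlsh command. From application code you set the level per statement through the driver instead; a minimal sketch with the Java driver (using the query from the question) would be:
Statement stmt = new SimpleStatement("SELECT * FROM users WHERE state='TX'");
stmt.setConsistencyLevel(ConsistencyLevel.QUORUM); // applies to this query only
ResultSet rs = session.execute(stmt);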
| Cassandra | 21,442,717 | 17 |
I am using embedded Cassandra. When I shut down and restart my Cassandra service, data is lost. I think recent data is not properly flushed to disk. So I tried using nodetool to flush data manually and check whether the data is available, but nodetool doesn't seem to work properly for the embedded Cassandra service. I get the following error:
c:\vijay\cassandra\bin>nodetool -host 192.168.2.86 -p 7199 drain
Starting NodeTool
Failed to connect to '192.168.2.86:7199': Connection refused: connect
I tried setting the JMX properties, but I am still getting the error. I added the following lines to my code:
System.setProperty("com.sun.management.jmxremote", "true");
System.setProperty("com.sun.management.jmxremote.port", "7197");
System.setProperty("com.sun.management.jmxremote.authenticate", "false");
System.setProperty("com.sun.management.jmxremote.ssl", "false");
System.setProperty("java.rmi.server.hostname", "my ip");
So, is there any way to manually flush the data to disk without using nodetool?
Edit 1:
After hours of trying I am now able to run nodetool (instead of adding the JMX configuration to the code, I added it to the Eclipse debug configuration and it worked). I ran the drain command and now the data is properly flushed to disk. So now my question is: why isn't the data flushed properly on its own? Every time I restart the Cassandra service, recent changes are gone.
| How are you stopping and starting the Cassandra server? The call to stopServer on the Cassandra daemon should flush any outstanding writes to the commit log. The thread will continue to do some processing even after the method returns, so if you're killing the JVM right after stopServer() then you might be preventing the data from being written.
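A minimal sketch of a clean start/stop cycle (assuming the CassandraDaemon API of that era, with activate/deactivate; method names may differ between versions):
import org.apache.cassandra.service.CassandraDaemon;

CassandraDaemon daemon = new CassandraDaemon();
daemon.activate();   // start the embedded server
// ... write some data ...
daemon.deactivate(); // stop the server cleanly so outstanding writes can be flushed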
| Cassandra | 20,520,839 | 17 |
I recently started working with Cassandra database. I have installed single node cluster in my local box. And I am working with Cassandra 1.2.3.
I was reading an article on the internet and I found these lines:
Cassandra writes are first written to a commit log (for durability),
and then to an in-memory table structure called a memtable. A write is
successful once it is written to the commit log and memory, so there
is very minimal disk I/O at the time of write. Writes are batched in
memory and periodically written to disk to a persistent table
structure called an SSTable (sorted string table).
So to understand the above lines, I wrote a simple program that writes to the Cassandra database using the Pelops client, and I was able to insert data into the Cassandra database.
And now I am trying to see how my data was written into the commit log and where that commit log file is. I also want to see how SSTables are generated, where I can find them on my local box, and what they contain.
I wanted to see these two files so that I can understand more how Cassandra works behind the scenes.
In my cassandra.yaml file, I have something like this
# directories where Cassandra should store data on disk.
data_file_directories:
- S:\Apache Cassandra\apache-cassandra-1.2.3\storage\data
# commit log
commitlog_directory: S:\Apache Cassandra\apache-cassandra-1.2.3\storage\commitlog
# saved caches
saved_caches_directory: S:\Apache Cassandra\apache-cassandra-1.2.3\storage\savedcaches
But when I opened the commit log, first of all it has a lot of data, so my Notepad++ is not able to open it properly, and when it does open, I cannot read it because of the encoding. And in my data folder, I cannot find anything.
Meaning this folder is empty for me:
S:\Apache Cassandra\apache-cassandra-1.2.3\storage\data\my_keyspace\users
Is there anything I am missing here? Can anybody explain to me how to read the commit log and SSTable files and where I can find them? And also what exactly happens behind the scenes whenever I write to the Cassandra database.
Updated:
Code I am using to insert into the Cassandra database:
public class MyPelops {
private static final Logger log = Logger.getLogger(MyPelops.class);
public static void main(String[] args) throws Exception {
// -------------------------------------------------------------
// -- Nodes, Pool, Keyspace, Column Family ---------------------
// -------------------------------------------------------------
// A comma separated List of Nodes
String NODES = "localhost";
// Thrift Connection Pool
String THRIFT_CONNECTION_POOL = "Test Cluster";
// Keyspace
String KEYSPACE = "my_keyspace";
// Column Family
String COLUMN_FAMILY = "users";
// -------------------------------------------------------------
// -- Cluster --------------------------------------------------
// -------------------------------------------------------------
Cluster cluster = new Cluster(NODES, 9160);
Pelops.addPool(THRIFT_CONNECTION_POOL, cluster, KEYSPACE);
// -------------------------------------------------------------
// -- Mutator --------------------------------------------------
// -------------------------------------------------------------
Mutator mutator = Pelops.createMutator(THRIFT_CONNECTION_POOL);
log.info("- Write Column -");
mutator.writeColumn(
COLUMN_FAMILY,
"Row1",
new Column().setName(" Name ".getBytes()).setValue(" Test One ".getBytes()).setTimestamp(new Date().getTime()));
mutator.writeColumn(
COLUMN_FAMILY,
"Row1",
new Column().setName(" Work ".getBytes()).setValue(" Engineer ".getBytes()).setTimestamp(new Date().getTime()));
log.info("- Execute -");
mutator.execute(ConsistencyLevel.ONE);
// -------------------------------------------------------------
// -- Selector -------------------------------------------------
// -------------------------------------------------------------
Selector selector = Pelops.createSelector(THRIFT_CONNECTION_POOL);
int columnCount = selector.getColumnCount(COLUMN_FAMILY, "Row1",
ConsistencyLevel.ONE);
System.out.println("- Column Count = " + columnCount);
List<Column> columnList = selector
.getColumnsFromRow(COLUMN_FAMILY, "Row1",
Selector.newColumnsPredicateAll(true, 10),
ConsistencyLevel.ONE);
System.out.println("- Size of Column List = " + columnList.size());
for (Column column : columnList) {
System.out.println("- Column: (" + new String(column.getName()) + ","
+ new String(column.getValue()) + ")");
}
System.out.println("- All Done. Exit -");
System.exit(0);
}
}
Keyspace and column family that I have created:
create keyspace my_keyspace with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = {replication_factor:1};
use my_keyspace;
create column family users with column_type = 'Standard' and comparator = 'UTF8Type';
| You are almost there in your understanding; however, you are missing some minute details.
So explaining things in a structured way, cassandra write operation life cycle is divided in these steps
commitlog write
memtable write
sstable write
Cassandra writes are first written to a commit log (for durability), and then to an in-memory table structure called a memtable. A write is said to be successful once it is written to the commit log and to memory, so there is very minimal disk I/O at write time. Whenever the memtable runs out of space, i.e. when the number of keys exceeds a certain limit (128 is the default) or when it reaches a time-duration threshold (cluster clock), it is flushed to an SSTable, an immutable on-disk structure (this mechanism is called flushing). Once the flush to an SSTable is done, you can see the corresponding data in the data folder; in your case that is S:\Apache Cassandra\apache-cassandra-1.2.3\storage\data. Each SSTable is composed mainly of 2 files, an Index file and a Data file
Index file contains - Bloom filter and Key-Offset pairs
Bloom Filter: A Bloom filter is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. False positives are possible, but false negatives are not. Cassandra uses bloom filters to save I/O when performing a key lookup: each SSTable has a bloom filter associated with it that Cassandra checks before doing any disk seeks, making queries for keys that don't exist almost free.
(Key, offset) pairs (points into data file)
Data file contains the actual column data
And regarding the commit log files, these are binary files maintained internally by Cassandra, which is why you are not able to read them properly in a text editor.
UPDATE:
Memtable is an in-memory cache with content stored as key/column (data are sorted by key). Each column family has a separate memtable, from which column data are retrieved by key. So now I hope you are in a clear state of mind to understand why we can't locate them on disk.
In your case your memtable is not full, as the memtable thresholds have not been breached yet, resulting in no flush. You can learn more about MemtableThresholds here, though it is recommended not to touch that dial.
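If you want to see SSTable files appear without waiting for those thresholds, you can force a flush yourself (a hedged example, assuming the default setup):
nodetool -h localhost flush my_keyspace users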
SSTableStructure:
Your data folder
KEYSPACE
CF
CompressionInfo.db
Data.db
Filter.db
Index.db
Statistics.db
snapshots //if snapshots are taken
For more information Refer sstable
| Cassandra | 15,857,779 | 17 |
I am looking for a code example to retrieve all rows and all columns of a column family. Something like:
SELECT * FROM MyTable
I see that this can be done using a RangeSlicesQuery, but you still have to provide a certain range. And I think you have to specify the column names too. Is there a clean and safe way to do this?
Using Hector 1.0 and Cassandra 1.0.
| Try something like this:
public class Dumper {
private final Cluster cluster;
private final Keyspace keyspace;
public Dumper() {
this.cluster = HFactory.getOrCreateCluster("Name", "hostname");
this.keyspace = HFactory.createKeyspace("Keyspace", cluster, new QuorumAllConsistencyLevelPolicy());
}
public void run() {
int row_count = 100;
RangeSlicesQuery<UUID, String, Long> rangeSlicesQuery = HFactory
.createRangeSlicesQuery(keyspace, UUIDSerializer.get(), StringSerializer.get(), LongSerializer.get())
.setColumnFamily("Column Family")
.setRange(null, null, false, 10)
.setRowCount(row_count);
UUID last_key = null;
while (true) {
rangeSlicesQuery.setKeys(last_key, null);
System.out.println(" > " + last_key);
QueryResult<OrderedRows<UUID, String, Long>> result = rangeSlicesQuery.execute();
OrderedRows<UUID, String, Long> rows = result.get();
Iterator<Row<UUID, String, Long>> rowsIterator = rows.iterator();
// we'll skip this first one, since it is the same as the last one from previous time we executed
if (last_key != null && rowsIterator.hasNext()) rowsIterator.next();
while (rowsIterator.hasNext()) {
Row<UUID, String, Long> row = rowsIterator.next();
last_key = row.getKey();
if (row.getColumnSlice().getColumns().isEmpty()) {
continue;
}
System.out.println(row);
}
if (rows.getCount() < row_count)
break;
}
}
public static void main(String[] args) {
new Dumper().run();
}
}
This will page through the column family in pages of 100 rows. It will only fetch 10 columns for each row (you will want to page very long rows too).
This is for a column family with uuids for row keys, strings for column names and longs for values. Hopefully it should be obvious how to change this.
| Cassandra | 8,418,448 | 17 |
this question may have been asked many times, but I could not find any suitable answer. Is there any ORM for Cassandra on Rails 3?
I have searched Google and found the following.
fauna/cassandra - cassandra client for rails
carbonfive/active_column - last updated 13-may-2011
winebarrel/activerecord-cassandra-adapter - last updated 5 months ago
scrum8/cassandrb - last updated 01-mar-2011
NZKoz/cassandra_object - last updated 30-may-2010
astrails/smallrecord - last updated 14-apr-2010
azati/ActiveCassandra - last updated 03-jun-2010
Please help me out in deciding which one I should go with.
Thanks
| Update: February 2013
data-axle/cassandra_object aka gotime-cassandra_object gem
brewster/cequel aka cequel gem
twitter/cassandra - a ruby client for Cassandra
and if you're using Datastax:
jasonmk/datastax_rails - aka datastax_rails gem
| Cassandra | 6,157,760 | 17 |
I am trying to develop an application using Hive as the database, and I have also looked at NoSQL solutions as an alternative to it.
Now I have decided to develop using Cassandra; my next problem is which client I should use. Which one is better: Hector, a pure Java solution, or Kundera with JPA-like development?
I prefer Hector, but I am curious about Kundera. Is anyone using Kundera? Which is better?
I'm also curious about CQL (Cassandra Query Language). Can it be integrated with Hector?
| Hector is slowly moving towards CQL integration. The first steps have been made, but because of the experience of an unstable API, the developers seem to have postponed a new release. The CQL API is rather new, as it should be nearly equivalent to a SQL syntax. I made some basic steps with CRUD operations to verify that data could be written and read via CQL.
Nevertheless, the CQL JAR is not usable out of the box like a standard JDBC driver as of now, and it misses some important features. Having had a look at the more or less difficult-to-understand Thrift API and the not really much simpler Hector API, I am convinced that CQL will be established as the state-of-the-art access API for Cassandra in versions 0.8.1 and 1.0, while Thrift will remain the native, raw access for some time.
The competition between both APIs has nothing to do with the decision of Hector. Hector itself provides additional services like failure and connection handling in the cluster. These are features being addressed by neither thrift nor CQL.
I don't really believe in all the other O/R mappers, even those claiming to provide full-fledged JPA. I cannot imagine how that would work well on top of Cassandra.
| Cassandra | 6,151,078 | 17 |
I would like to start using Cassandra with a node.js deployment, but I can't find a Thrift or Cassandra client for Node.js and/or JavaScript.
Is there one?
Is there a simple means of generating Thrift connections?
Update: The short answer to this question turns out to be no, there is no JS client for Thrift that is compatible with Cassandra.
Further Update: The next release of Cassandra (0.8 at time of writing) is going to have support for an Avro API. There is already node.js module for Avro support.
| Someone made one now:
https://github.com/wadey/node-thrift
Update:
Rackspace released a node cassandra api:
http://code.google.com/a/apache-extras.org/p/cassandra-node/
Update:
They moved it to github:
https://github.com/racker/node-cassandra-client
Update:
There is a CQL driver now too:
https://github.com/simplereach/helenus
Update:
There is a CQL driver, that uses the Cassandra native protocol
https://github.com/jorgebay/node-cassandra-cql
Update:
DataStax released a CQL driver for Cassandra using the native protocol:
https://github.com/datastax/nodejs-driver
| Cassandra | 2,947,470 | 17 |
Anyone out there using Cassandra (http://cassandra.apache.org/) with PHP? What PHP module would you guys recommend to communicate between PHP and Cassandra?
| Although this is an old question, thobbs's version of PHPCassa has become a nice standard for PHP development with Apache Cassandra. The link referenced in the accepted answer is to the hoan version of PHPCassa, which is not as current (last update was 2 years ago) or robust as the forked version that thobbs maintains: https://github.com/thobbs/phpcassa
Compatible with Cassandra 0.7, 0.8 and 1.x
Optional C extension for improved performance
I'm thoroughly happy with it, and have been for well over a year now. Continual development and you can see contributions being pushed upstream now from other developers.
You'll note that Pandra hasn't had any updates for quite some time either (over a year).
| Cassandra | 2,508,649 | 17 |
We are looking at using Cassandra to store a stream of information coming from various sources.
One issue we are facing is the best way to query between two dates.
For example we will need to retrieve an object between datetime dt1 and datetime dt2.
We are currently considering using the created Unix timestamp as the key pointing to the actual object, and then using get_key_range to retrieve it.
Obviously this wouldn't work if two items have the same timestamp.
Is this the best way to do datetime in noSQL stores in general?
| Cassandra rows can be very large, so consider modeling it as columns in a row rather than rows in a CF; then you can use the column slice operations, which are faster than row slices. If there are no "natural" keys associated with this then you can use daily or hourly keys like "2010/02/08 13:00".
Otherwise, yes, using range queries (get_key_range is deprecated in 0.5; use get_range_slice) is your best option.
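To make the column-slice idea concrete, here is what the same model looks like in later CQL terms (a hedged sketch; at the time of this answer you would build it with the Thrift API instead, and the table and column names here are made up):
CREATE TABLE events (
    bucket text,            -- e.g. '2010/02/08 13:00', one partition per hour
    created_at timestamp,
    payload text,
    PRIMARY KEY (bucket, created_at)
);

SELECT * FROM events
WHERE bucket = '2010/02/08 13:00'
AND created_at >= '2010-02-08 13:00:00' AND created_at < '2010-02-08 13:30:00';
If two items can share a timestamp, add a timeuuid column to the clustering key to keep them distinct.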
| Cassandra | 2,212,279 | 17 |
I created a keyspace and a table (column family) within it.
Let's say "ks.cf"
After entering a few hundred thousand rows into the column family cf, I checked the disk usage using df -h.
Then, I dropped the keyspace using the command DROP KEYSPACE ks from cqlsh.
Even after the drop, the disk usage remains the same. I also ran nodetool compact, but no luck.
Can anyone help me out in configuring these things so that disk usage gets freed up after deleting the data/rows?
| Ran into this problem recently. After dropping a table a snapshot is made. This snapshot will allow you to roll the drop back if it was not intended. If you do, however, want that hard drive space back, you need to run:
nodetool -h localhost -p 7199 clearsnapshot
on the appropriate nodes. Additionally, you can turn snapshots off with auto_snapshot: false in your cassandra.yaml.
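If you first want to see which snapshots are taking up the space, a hedged aside: versions recent enough to have the command (2.1+) also offer
nodetool -h localhost -p 7199 listsnapshots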
| Cassandra | 33,104,071 | 16 |
I am trying to learn Cassandra and always find the best way is to start with creating a very simple and small application. Hence I am creating a basic messaging application which will use Cassandra as the back-end. I would like to do the following:
User will create an account with a username, email, and password. The
email and the password can be changed at any time.
The user can add another user as their contact. The user would add a contact by searching their username or email. The contacts don't need to be mutual, meaning if I add a user they are my contact; I don't need to wait for them to accept/approve anything like in Facebook.
A message is sent from one user to another user. The sender needs to
be able to see the messages they sent (ordered by time) and the
messages which were sent to them (ordered by time). When a user opens
the app I need to check the database for any new messages for that
user. I also need to mark if the message has been read.
As I come from the world of relational databases, my relational schema would look something like this:
UsersTable
username (text)
email (text)
password (text)
time_created (timestamp)
last_loggedIn (timestamp)
------------------------------------------------
ContactsTable
user_i_added (text)
user_added_me (text)
------------------------------------------------
MessagesTable
from_user (text)
to_user (text)
msg_body (text)
metadata (text)
has_been_read (boolean)
message_sent_time (timestamp)
Reading through a couple of Cassandra textbooks, I have formed an idea of how to model the database. My main concern is to model the database in a very efficient manner, hence I am trying to avoid things such as secondary indexes. This is my model so far:
CREATE TABLE users_by_username (
    username text PRIMARY KEY,
    email text,
    password text,
    timeCreated timestamp,
    last_loggedin timestamp
)
CREATE TABLE users_by_email (
    email text PRIMARY KEY,
    username text,
    password text,
    timeCreated timestamp,
    last_loggedin timestamp
)
To spread data evenly and to read a minimal number of partitions (hopefully just one), I can quickly look up a user by their username or email. The downside is obviously that I am doubling my data, but storage is quite cheap, so I find it a good trade-off compared to using secondary indexes. Last logged in will also need to be written twice, but Cassandra is efficient at writes, so I believe this is a good trade-off as well.
For the contacts I can't think of any other way to model this, so I modelled it very similarly to how I would in a relational database. This is quite a denormalized design, which I believe should be good for performance according to the books I have read.
CREATE TABLE "user_follows" (
follower_username text,
followed_username text,
timeCreated timestamp,
PRIMARY KEY ("follower_username", "followed_username")
);
CREATE TABLE "user_followedBy" (
followed_username text,
follower_username text,
timeCreated timestamp,
PRIMARY KEY ("followed_username", "follower_username")
);
I am stuck on how to create this next part. For messaging I was thinking of this table, as it creates wide rows, which enables ordering of the messages.
I need messaging to answer two questions. It first needs to be able to show the user all the messages they have, and also to show the user the messages which are new and unread. This is a basic model, but I am unsure how to make it more efficient.
CREATE TABLE messages (
message_id uuid,
from_user text,
to_user text,
body text,
hasRead boolean,
timeCreated timeuuid,
PRIMARY KEY ((to_user), timeCreated )
) WITH CLUSTERING ORDER BY (timeCreated ASC);
I was also looking at using things such as STATIC columns to 'glue' together the users and messages, as well as SETs to store contact relationships, but from my narrow understanding so far the design I presented is more efficient. I ask if there are any ideas to improve this model's efficiency, if there are better practices for what I am trying to do, or if there are any hidden problems I might face with this design?
In conclusion, I am trying to model around the queries. If I were using relational databases, these would essentially be the queries I am looking to answer:
To Login:
SELECT * FROM USERS WHERE (USERNAME = [MY_USERNAME] OR EMAIL = [MY_EMAIL]) AND PASSWORD = [MY_PASSWORD];
------------------------------------------------------------------------------------------------------------------------
Update user info:
UPDATE USERS SET password = [NEW_PASSWORD] WHERE username = [MY_USERNAME];
UPDATE USERS SET email = [NEW_EMAIL] WHERE username = [MY_USERNAME];
------------------------------------------------------------------------------------------------------------------------
To Add contact (If by username):
INSERT INTO followings(following,follower) VALUES([USERNAME_I_WANT_TO_FOLLOW],[MY_USERNAME]);
------------------------------------------------------------------------------------------------------------------------
To Add contact (If by email):
SELECT username FROM users where email = [CONTACTS_EMAIL];
Then application layer sends over another query with the username:
INSERT INTO followings(following,follower) VALUES([USERNAME_I_WANT_TO_FOLLOW],[MY_USERNAME]);
------------------------------------------------------------------------------------------------------------------------
To View contacts:
SELECT following FROM followings WHERE follower = [MY_USERNAME];
------------------------------------------------------------------------------------------------------------------------
To Send Message:,
INSERT INTO MESSAGES (MSG_ID, FROM, TO, MSG, IS_MSG_NEW) VALUES (uuid, [FROM_USERNAME], [TO_USERNAME], 'MY MSG', true);
------------------------------------------------------------------------------------------------------------------------
To View All Messages (Some pagination type of technique where shows me the 10 recent messages, yet shows which ones are unread):
SELECT * FROM MESSAGES WHERE TO = [MY_USERNAME] LIMIT 10;
------------------------------------------------------------------------------------------------------------------------
Once Message is read:
UPDATE MESSAGES SET IS_MSG_NEW = false WHERE TO = [MY_USERNAME] AND MSG_ID = [MSG_ID];
Cheers
| Yes it's always a struggle to adapt to the limitations of Cassandra when coming from a relational database background. Since we don't yet have the luxury of doing joins in Cassandra, you often want to cram as much as you can into a single table. In your case that would be the users_by_username table.
There are a few features of Cassandra that should allow you to do that.
Since you are new to Cassandra, you could probably use Cassandra 3.0, which is currently in beta release. In 3.0 there is a nice feature called materialized views. This would allow you to have users_by_username as a base table, and create the users_by_email as a materialized view. Then Cassandra will update the view automatically whenever you update the base table.
Another feature that will help you is user defined types (in C* 2.1 and later). Instead of creating separate tables for followers and messages, you can create the structure of those as UDT's, and then in the user table keep lists of those types.
So a simplified view of your schema could be like this (I'm not showing some of the fields like timestamps to keep this simple, but those are easy to add).
First create your UDT's:
CREATE TYPE user_follows (
    followed_username text,
    street text
);
CREATE TYPE msg (
from_user text,
body text
);
Next we create your base table:
CREATE TABLE users_by_username (
username text PRIMARY KEY,
email text,
password text,
follows list<frozen<user_follows>>,
followed_by list<frozen<user_follows>>,
new_messages list<frozen<msg>>,
old_messages list<frozen<msg>>
);
Now we create a materialized view partitioned by email:
CREATE MATERIALIZED VIEW users_by_email AS
    SELECT username, password, follows, new_messages, old_messages FROM users_by_username
    WHERE email IS NOT NULL AND username IS NOT NULL
    PRIMARY KEY (email, username);
Now let's take it for a spin and see what it can do. Let's create a user:
INSERT INTO users_by_username (username , email , password )
VALUES ( 'someuser', '[email protected]', 'somepassword');
Let the user follow another user:
UPDATE users_by_username SET follows = [{followed_username: 'followme2', street: 'mystreet2'}] + follows
WHERE username = 'someuser';
Let's send the user a message:
UPDATE users_by_username SET new_messages = [{from_user: 'auser', body: 'hi someuser!'}] + new_messages
WHERE username = 'someuser';
Now let's see what's in the table:
SELECT * FROM users_by_username ;
username | email | followed_by | follows | new_messages | old_messages | password
----------+-------------------+-------------+---------------------------------------------------------+----------------------------------------------+--------------+--------------
someuser | [email protected] | null | [{followed_username: 'followme2', street: 'mystreet2'}] | [{from_user: 'auser', body: 'hi someuser!'}] | null | somepassword
Now let's check that our materialized view is working:
SELECT new_messages, old_messages FROM users_by_email WHERE email='[email protected]';
new_messages | old_messages
----------------------------------------------+--------------
[{from_user: 'auser', body: 'hi someuser!'}] | null
Now let's read the message and move it to the old messages:
BEGIN BATCH
DELETE new_messages[0] FROM users_by_username WHERE username='someuser'
UPDATE users_by_username SET old_messages = [{from_user: 'auser', body: 'hi someuser!'}] + old_messages where username = 'someuser'
APPLY BATCH;
SELECT new_messages, old_messages FROM users_by_email WHERE email='[email protected]';
new_messages | old_messages
--------------+----------------------------------------------
null | [{from_user: 'auser', body: 'hi someuser!'}]
So hopefully that gives you some ideas you can use. Have a look at the documentation on collections (i.e. lists, maps, and sets), since those can really help you to keep more information in one table and are sort of like tables within a table.
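As a small illustration of those collection types, a hedged sketch (the settings column is made up for this example):
ALTER TABLE users_by_username ADD settings map<text, text>;
UPDATE users_by_username SET settings['theme'] = 'dark' WHERE username = 'someuser';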
| Cassandra | 32,447,699 | 16 |
I'm new to Cassandra and I have run into a problem. I created a keyspace demodb and a table users. This table has 3 columns: id (int, primary key), firstname (varchar), and name (varchar).
This request returns the correct result:
SELECT * FROM demodb.users WHERE id = 3;
but this one:
SELECT * FROM demodb.users WHERE firstname = 'francois';
doesn't work and I get the following error message:
InvalidRequest: code=2200 [Invalid query] message="No secondary indexes on the restricted columns support the provided operators: "
This request also doesn't work:
SELECT * FROM users WHERE firstname = 'francois' ORDER BY id DESC LIMIT 5;
InvalidRequest: code=2200 [Invalid query] message="ORDER BY with 2ndary indexes is not supported."
Thanks in advance.
|
This request also doesn't work:
That's because you are misunderstanding how sort order works in Cassandra. Instead of using a secondary index on firstname, create a table specifically for this query, like this:
CREATE TABLE usersByFirstName (
id int,
firstname text,
lastname text,
PRIMARY KEY (firstname,id));
This query should now work:
SELECT * FROM usersByFirstName WHERE firstname='francois'
ORDER BY id DESC LIMIT 5;
Note that I have created a compound primary key on firstname and id. This will partition your data on firstname (allowing you to query by it), while also clustering your data by id. By default, your data will be clustered by id in ascending order. To alter this behavior, you can specify a CLUSTERING ORDER in your table creation statement:
WITH CLUSTERING ORDER BY (id DESC)
...and then you won't even need an ORDER BY clause.
I recently wrote an article on how clustering order works in Cassandra (We Shall Have Order). It explains this, and covers some ordering strategies as well.
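For completeness: if you only needed the plain equality query (without the ORDER BY), the original error could also be addressed with a secondary index. A hedged alternative, since a purpose-built query table like the one above is usually the better practice:
CREATE INDEX ON demodb.users (firstname);
SELECT * FROM demodb.users WHERE firstname = 'francois';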
| Cassandra | 29,793,438 | 16 |
I have recently lowered gc_grace_seconds for a CQL table. I am running LeveledCompactionStrategy. Is it possible for me to force purging of old tombstones from my SSTables?
| TL;DR
Your tombstones will disappear on their own through compaction, but make sure you are running repair or they may come back from the dead.
http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html
Adding some more details:
Tombstones are not immediately available for deletion until both:
1) gc_grace_seconds has expired
2) they meet the requirements configured in tombstone compaction sub-properties
I need to free up disk space by expiring tombstones. How do I do this quickly?
1) Run a repair to ensure your tombstones are consistent
2) Decrease gc_grace_seconds for your table (alter table statement)
3) Configure your compaction sub-properties to speed up tombstone removal:
Decrease tombstone_compaction_interval and decrease tombstone_threshold, or set unchecked_tombstone_compaction to true to ignore both conditions and collect based purely on gc grace.
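A hedged example of steps 2 and 3 rolled into one statement (the sub-property names are as documented for compaction strategies; the values here are purely illustrative):
ALTER TABLE my_keyspace.my_table
WITH gc_grace_seconds = 3600
AND compaction = { 'class': 'LeveledCompactionStrategy',
                   'tombstone_threshold': '0.1',
                   'tombstone_compaction_interval': '86400',
                   'unchecked_tombstone_compaction': 'true' };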
Is it working?
You can see statistics about tombstones in nodetool cfstats and by using the sstablemetadata utility found in your tools directory: sstablemetadata <sstable filenames>.
| Cassandra | 27,566,615 | 16 |
I understand that this is not possible using an UPDATE.
What I would like to do instead is migrate all rows with, say, PK=0 to new rows where PK=1. Are there any simple ways of achieving this?
| For a relatively simple way, you could always do a quick COPY TO/FROM in cqlsh.
Let's say that I have a column family (table) called "emp" for employees.
CREATE TABLE stackoverflow.emp (
id int PRIMARY KEY,
fname text,
lname text,
role text
)
And for the purposes of this example, I have one row in it.
aploetz@cqlsh:stackoverflow> SELECT * FROM emp;
id | fname | lname | role
----+-------+-------+-------------
1 | Angel | Pay | IT Engineer
If I want to re-create Angel with a new id, I can COPY the table's contents TO a .csv file:
aploetz@cqlsh:stackoverflow> COPY stackoverflow.emp TO '/home/aploetz/emp.csv';
1 rows exported in 0.036 seconds.
Now, I'll use my favorite editor to change the id of Angel to 2 in emp.csv. Note that if you have multiple rows in your file (that don't need to be updated), this is your opportunity to remove them:
2,Angel,Pay,IT Engineer
I'll save the file, and then COPY the updated row back into Cassandra FROM the file:
aploetz@cqlsh:stackoverflow> COPY stackoverflow.emp FROM '/home/aploetz/emp.csv';
1 rows imported in 0.038 seconds.
Now Angel has two rows in the "emp" table.
aploetz@cqlsh:stackoverflow> SELECT * FROM emp;
id | fname | lname | role
----+-------+-------+-------------
1 | Angel | Pay | IT Engineer
2 | Angel | Pay | IT Engineer
(2 rows)
For more information, check the DataStax doc on COPY.
| Cassandra | 27,022,325 | 16 |
When importing a record with a large field inside (longer than 124214 characters) I am getting the error
"field larger than field limit (131072)"
I saw from other posts how to solve this in Python, but I don't know if it is possible in cqlsh.
Thanks
| Take a look at this answer:
_csv.Error: field larger than field limit (131072)
You will need to add this solution near the top of the cqlsh file (which is a Python script). So after its existing imports:
import csv
import getpass
add the following line (sys is already imported by cqlsh):
csv.field_size_limit(sys.maxsize)
| Cassandra | 24,168,235 | 16 |
I have a Cassandra question. Do you know how Cassandra does updates/increments of counters?
I want to use a Storm bolt (CassandraCounterBatchingBolt from the storm-contrib repo on GitHub) which writes into Cassandra. However, I'm not sure how some of the implementation of the incrementCounterColumn() method works, and there are also the limitations of Cassandra counters (from: http://wiki.apache.org/cassandra/Counters) which make them useless for my scenario IMHO:
If a write fails unexpectedly (timeout or loss of connection to the coordinator node) the client will not know if the operation has been performed. A retry can result in an over count CASSANDRA-2495.
Counter removal is intrinsically limited. For instance, if you issue very quickly the sequence "increment, remove, increment" it is possible for the removal to be lost
Anyway, here is my scenario:
I update the same counter faster than the updates propagate to other Cassandra nodes.
Example:
Say I have 3 cassandra nodes. The counters on each of these nodes are 0.
Node1:0, node2:0, node3:0
An increment comes: 5 -> Node1:0, node2:0, node3:0
Increment starts at node 2 – still needs to propagate to node1 and node3
Node1:0, node2:5, node3:0
In the meantime, another increment arrives before previous increment
is propagated: 3 -> Node1:0, node2:5, node3:0
Assuming 3 starts at a different node than where 5 started we have:
Node1:3, node2:5, node3:0
Now if 3 gets propagated to the other nodes AS AN INCREMENT and not as a new value
(and the same for 5) then eventually the nodes would all equal 8 and this is what I want.
If 3 overwrites 5 (because it has a later timestamp) this is problematic – not what I want.
Do you know how these updates/increments are handled by Cassandra?
Note that a read before a write is still susceptible to the same problem, depending on which replica node the read executes on (quorum can still fail if propagation is not far along)
I'm also thinking that maybe putting a cache between my Storm bolt and Cassandra might solve this issue, but that's a story for another time.
| Counters in C* have a complex internal representation that avoids most (but not all) problems of counting things in a leaderless distributed system. I like to think of them as sharded counters. A counter consists of a number of sub-counters identified by host ID and a version number. The host that receives the counter operation increments only its own sub-counter, and also increments the version. It then replicates its whole counter state to the other replicas, which merge it with their states. When the counter is read the node handling the read operation determines the counter value by summing up the total of the counts from each host.
On each node a counter increment is just like everything else in Cassandra, just a write. The increment is written to the memtable, and the local value is determined at read time by merging all of the increments from the memtable and all SSTables.
I hope that explanation helps you believe me when I say that you don't have to worry about incrementing counters faster than Cassandra can handle. Since each node keeps its own counter, and never replicates increment operations, there is no possibility of counts getting lost through race conditions like a read-modify-write scenario would introduce. If Cassandra accepts the write, you're pretty much guaranteed that it will count.
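For reference, this is what a counter looks like at the CQL level, a minimal sketch (the table and column names are made up):
CREATE TABLE page_views (
    page text PRIMARY KEY,
    views counter
);

UPDATE page_views SET views = views + 1 WHERE page = '/home';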
What you're not guaranteed, though, is that the count will appear correct at all times. If an increment is written to one node but the counter value is read from another just after, there is no guarantee that the increment has been replicated, and you also have to consider what would happen during a network partition. This is more or less the same as with any write in Cassandra; it's part of its eventually consistent nature, and it depends on which consistency levels you used for the operations.
There is also the possibility of a lost acknowledgement. If you do an increment and lose the connection to Cassandra before you can get the response back, you can't know whether or not your write got through. And when you get the connection back you can't tell either, since you don't know what the count was before you incremented. This is an inherent problem with systems that choose availability over consistency, and the price you pay for many of the other benefits.
Finally, the issue of rapid increment/remove/increment sequences is real, and something you should avoid. The problem is that the increment operation will essentially resurrect the column, and if these operations come close enough to each other they might get the same timestamp. Cassandra is strictly last-write-wins and determines the last write based on the timestamp of the operation. If two operations have the same timestamp, the "greater" one wins, which means the one that sorts after in a strict byte order. It's real, but I wouldn't worry too much about it unless you're doing very rapid writes and deletes to the same value (which is probably a fault in your data model).
Here's a good guide to the internals of Cassandra's counters: http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf
| Cassandra | 20,953,821 | 16 |
I have the below table in CQL:
create table test (
employee_id text,
employee_name text,
value text,
last_modified_date timeuuid,
primary key (employee_id)
);
I inserted a couple of records into the above table like this, which is how I will be inserting them in our actual use-case scenario:
insert into test (employee_id, employee_name, value, last_modified_date) values ('1', 'e27', 'some_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('2', 'e27', 'some_new_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('3', 'e27', 'some_again_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('4', 'e28', 'some_values', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('5', 'e28', 'some_new_values', now());
Now I was running a select query for: give me all the employee_ids for employee_name e27.
select employee_id from test where employee_name = 'e27';
And this is the error I am getting:
Bad Request: No indexed columns present in by-columns clause with Equal operator
Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
Is there anything I am doing wrong here?
My use cases in general are:
Give me everything for a given employee_name.
Give me everything that has changed in the last 5 minutes.
Give me the latest employee_id and value for a given employee_name.
Give me all the employee_ids for a given employee_name.
I am running Cassandra 1.2.11
| The general rule is simple: "you can only query by columns that are part of the key". As an explanation, all other queries would require a complete scan of the table, which might mean sifting through a lot of data.
There are things that can modify this rule:
use secondary indexes for columns with low cardinality (more details here)
define multi-column keys (e.g. PRIMARY KEY (col1, col2), which would allow queries like col1 = value1, or col1 = value1 AND col2 COND)
use ALLOW FILTERING in queries. This will result in a warning as Cassandra will have to sift through a lot of data and there will be no performance guarantees. For more details see details of ALLOW FILTERING in CQL and this SO thread
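For this particular table, the secondary-index route would look like this (a hedged example; keep the low-cardinality caveat above in mind):
CREATE INDEX ON test (employee_name);
SELECT employee_id FROM test WHERE employee_name = 'e27';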
| Cassandra | 19,756,368 | 16 |