question | answer | tag | question_id | score
---|---|---|---|---
I'm running the latest build of the Docker Apple Silicon Preview. I created the tutorial container/images and it works fine. When I went to create a custom YAML file and run docker-compose I get the following error when pulling mysql:
ERROR: no matching manifest for linux/arm64/v8 in the manifest list entries
Here is a snippet from my YAML file:
version: '3'
services:
  # Database
  db:
    image: mysql-server:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: pass
      MYSQL_DATABASE: wp
      MYSQL_USER: wp
      MYSQL_PASSWORD: wp
    networks:
      - wpsite
I've tried :latest and :8 which result in the same error. It pulls phpmyadmin and wordpress fine.
| Well, technically it will not solve your issue (running MySQL on ARM), but for the time being, you could add platform to your service like:
services:
  db:
    platform: linux/x86_64
    image: mysql:5.7
    ...
Alternatively, consider using MariaDB, which should work as a drop-in replacement, e.g.:
services:
  db:
    image: mariadb:10.5.8
    ...
Both approaches work for me on an M1 with the Docker Preview.
| MySQL | 65,456,814 | 496 |
Bearing in mind that I'll be performing calculations on lat / long pairs, what datatype is best suited for use with a MySQL database?
| Basically it depends on the precision you need for your locations. Using DOUBLE you'll have a 3.5nm precision. DECIMAL(8,6)/(9,6) goes down to 16cm. FLOAT is 1.7m...
This very interesting table has a more complete list: http://mysql.rjweb.org/doc.php/latlng :
Datatype               Bytes   Resolution         Typical use
Deg*100 (SMALLINT)       4     1570 m   1.0 mi    Cities
DECIMAL(4,2)/(5,2)       5     1570 m   1.0 mi    Cities
SMALLINT scaled          4      682 m   0.4 mi    Cities
Deg*10000 (MEDIUMINT)    6       16 m    52 ft    Houses/Businesses
DECIMAL(6,4)/(7,4)       7       16 m    52 ft    Houses/Businesses
MEDIUMINT scaled         6      2.7 m   8.8 ft
FLOAT                    8      1.7 m   5.6 ft
DECIMAL(8,6)/(9,6)       9      16 cm   1/2 ft    Friends in a mall
Deg*10000000 (INT)       8      16 mm   5/8 in    Marbles
DOUBLE                  16     3.5 nm   ...       Fleas on a dog
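For example, a minimal sketch of a table using the DECIMAL option (table and column names are made up):
CREATE TABLE places (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  lat DECIMAL(8,6) NOT NULL,  -- latitude:  -90.000000 .. 90.000000
  lng DECIMAL(9,6) NOT NULL   -- longitude: -180.000000 .. 180.000000
);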
| MySQL | 159,255 | 496 |
I'm interested in learning some (ideally) database agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases:
SQL Server
MySQL
PostgreSQL
SQLite
Oracle
I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing others' more agnostic approaches:
WITH Ordered AS (
SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate
FROM Orders)
SELECT *
FROM Ordered
WHERE RowNumber = 1000000
Credit for the above SQL: Firoz Ansari's Weblog
Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?
| There are ways of doing this in optional parts of the standard, but a lot of databases support their own way of doing it.
A really good site that talks about this and other things is http://troels.arvin.dk/db/rdbms/#select-limit.
Basically, PostgreSQL and MySQL support the non-standard:
SELECT...
LIMIT y OFFSET x
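For example, to fetch the 1,000,000th row of the question's Orders table with this syntax (OFFSET is zero-based):
SELECT OrderID, OrderDate
FROM Orders
ORDER BY OrderID
LIMIT 1 OFFSET 999999;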
Oracle, DB2 and MSSQL support the standard windowing functions:
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
columns
FROM tablename
) AS foo
WHERE rownumber <= n
(which I just copied from the site linked above since I never use those DBs)
Update: As of PostgreSQL 8.4 the standard windowing functions are supported, so expect the second example to work for PostgreSQL as well.
Update: SQLite added window functions support in version 3.25.0 on 2018-09-15 so both forms also work in SQLite.
| MySQL | 16,568 | 495 |
I need to add multiple columns to a table but position the columns after a column called lastname.
I have tried this:
ALTER TABLE `users` ADD COLUMN
(
`count` smallint(6) NOT NULL,
`log` varchar(12) NOT NULL,
`status` int(10) unsigned NOT NULL
)
AFTER `lastname`;
I get this error:
You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near ') AFTER lastname' at line 7
How can I use AFTER in a query like this?
| Try this
ALTER TABLE users
ADD COLUMN `count` SMALLINT(6) NOT NULL AFTER `lastname`,
ADD COLUMN `log` VARCHAR(12) NOT NULL AFTER `count`,
ADD COLUMN `status` INT(10) UNSIGNED NOT NULL AFTER `log`;
See the MySQL ALTER TABLE documentation for the full syntax.
| MySQL | 17,541,312 | 494 |
I asked a question and got this reply which helped.
UPDATE TABLE_A a JOIN TABLE_B b
ON a.join_col = b.join_col AND a.column_a = b.column_b
SET a.column_c = a.column_c + 1
Now I am looking to do this when there are three tables involved, something like this:
UPDATE tableC c JOIN tableB b JOIN tableA a
My question is basically... is it possible to do three table joins on an UPDATE statement? And what is the correct syntax for it?
Do I do the following?
JOIN tableB, tableA
JOIN tableB JOIN tableA
| The answer is yes, you can.
Try it like this:
UPDATE TABLE_A a
JOIN TABLE_B b ON a.join_col = b.join_col AND a.column_a = b.column_b
JOIN TABLE_C c ON [condition]
SET a.column_c = a.column_c + 1
For a general update join:
UPDATE TABLEA a
JOIN TABLEB b ON a.join_colA = b.join_colB
SET a.columnToUpdate = [something]
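As a sketch with made-up table and column names, a three-table update join could look like this:
UPDATE orders o
JOIN customers c ON o.customer_id = c.id
JOIN countries n ON c.country_id = n.id
SET o.tax_rate = n.default_tax_rate
WHERE o.tax_rate IS NULL;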
| MySQL | 15,209,414 | 493 |
I am trying to rename a column in MySQL community server 5.5.27 using this SQL expression:
ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;
I also tried
ALTER TABLE table_name RENAME old_col_name TO new_col_name;
But it says:
Error: check the Manual that corresponds to your MySQL server version
| Use this query:
ALTER TABLE tableName CHANGE oldcolname newcolname datatype(length);
The RENAME function is used in Oracle databases.
ALTER TABLE tableName RENAME COLUMN oldcolname TO newcolname;
@lad2025 mentions it below, but I thought it'd be nice to add what he said. Thank you @lad2025!
In MySQL 8.0 you can use RENAME COLUMN to rename any column you need renamed:
ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;
ALTER TABLE Syntax:
RENAME COLUMN:
Can change a column name but not its definition.
More convenient than CHANGE to rename a column without changing its definition.
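For example, renaming a hypothetical column old_name to new_name on table t (for the pre-8.0 CHANGE form, the full column definition, here assumed to be VARCHAR(50) NOT NULL, must be restated):
ALTER TABLE t CHANGE old_name new_name VARCHAR(50) NOT NULL;  -- MySQL 5.x
ALTER TABLE t RENAME COLUMN old_name TO new_name;             -- MySQL 8.0+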
| MySQL | 30,290,880 | 489 |
When we create a table in MySQL with a VARCHAR column, we have to set the length for it. But for TEXT type we don't have to provide the length.
What are the differences between VARCHAR and TEXT?
| TL;DR
TEXT
fixed max size of 65535 characters (you cannot limit the max size)
takes 2 + c bytes of disk space, where c is the length of the stored string.
cannot be (fully) part of an index. One would need to specify a prefix length.
VARCHAR(M)
variable max size of M characters
M needs to be between 1 and 65535
takes 1 + c bytes (for M ≤ 255) or 2 + c (for 256 ≤ M ≤ 65535) bytes of disk space where c is the length of the stored string
can be part of an index
More Details
TEXT has a fixed max size of 2¹⁶-1 = 65535 characters.
VARCHAR has a variable max size M up to M = 2¹⁶-1.
So you cannot choose the size of TEXT but you can for a VARCHAR.
The other difference is, that you cannot put an index (except for a fulltext index) on a TEXT column.
So if you want to have an index on the column, you have to use VARCHAR. But notice that the length of an index is also limited, so if your VARCHAR column is too long you have to use only the first few characters of the VARCHAR column in your index (See the documentation for CREATE INDEX).
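For example, a sketch of such a prefix index, using hypothetical table and column names:
CREATE INDEX idx_articles_body ON articles (body(20));
This indexes only the first 20 characters of each value, which is what you have to do for TEXT columns or very long VARCHAR columns.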
But you also want to use VARCHAR, if you know that the maximum length of the possible input string is only M, e.g. a phone number or a name or something like this. Then you can use VARCHAR(30) instead of TINYTEXT or TEXT and if someone tries to save the text of all three "Lord of the Rings" books in your phone number column you only store the first 30 characters :)
Edit: If the text you want to store in the database is longer than 65535 characters, you have to choose MEDIUMTEXT or LONGTEXT, but be careful: MEDIUMTEXT stores strings up to 16 MB, LONGTEXT up to 4 GB. If you use LONGTEXT and get the data via PHP (at least if you use mysqli without store_result), you may get a memory allocation error, because PHP tries to allocate 4 GB of memory to be sure the whole string can be buffered. This may also happen in languages other than PHP.
However, you should always check the input (Is it too long? Does it contain strange code?) before storing it in the database.
Notice: For both types, the required disk space depends only on the length of the stored string and not on the maximum length.
E.g. if you use the charset latin1 and store the text "Test" in VARCHAR(30), VARCHAR(100) and TINYTEXT, it always requires 5 bytes (1 byte to store the length of the string and 1 byte for each character). If you store the same text in a VARCHAR(2000) or a TEXT column, it would also require the same space, but, in this case, it would be 6 bytes (2 bytes to store the string length and 1 byte for each character).
For more information have a look at the documentation.
Finally, I want to add a note that both TEXT and VARCHAR are variable-length data types, and so they most likely minimize the space you need to store the data. But this comes with a trade-off in performance. If you need better performance, you have to use a fixed-length type like CHAR. You can read more about this here.
| MySQL | 25,300,821 | 485 |
My MySQL database contains several tables using different storage engines
(specifically myisam and innodb). How can I find out which tables are
using which engine?
| SHOW TABLE STATUS WHERE Name = 'xxx'
This will give you (among other things) an Engine column, which is what you want.
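If you want to see the engine for every table in a schema at once, the same information is available from information_schema (replace your_database with the schema name):
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database';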
| MySQL | 213,543 | 481 |
I have a datetime column in MySQL.
How can I convert it for display as mm/dd/yy H:M (AM/PM) using PHP?
| If you're looking for a way to normalize a date into MySQL format, use the following
$phpdate = strtotime( $mysqldate );
$mysqldate = date( 'Y-m-d H:i:s', $phpdate );
The line $phpdate = strtotime( $mysqldate ) accepts a string and performs a series of heuristics to turn that string into a unix timestamp.
The line $mysqldate = date( 'Y-m-d H:i:s', $phpdate ) uses that timestamp and PHP's date function to turn that timestamp back into MySQL's standard date format.
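For the display format the question actually asks about (mm/dd/yy with AM/PM), a sketch using the same two functions:
$display = date( 'm/d/y h:i A', strtotime( $mysqldate ) ); // e.g. 08/16/12 01:15 PM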
(Editor Note: This answer is here because of an original question with confusing wording, and the general Google usefulness this answer provided even if it didn't directly answer the question that now exists)
| MySQL | 136,782 | 481 |
How would I delete all duplicate data from a MySQL Table?
For example, with the following data:
SELECT * FROM names;
+----+--------+
| id | name |
+----+--------+
| 1 | google |
| 2 | yahoo |
| 3 | msn |
| 4 | google |
| 5 | google |
| 6 | yahoo |
+----+--------+
I would use SELECT DISTINCT name FROM names; if it were a SELECT query.
How would I do this with DELETE to only remove duplicates and keep just one record of each?
|
Editor warning: This solution is computationally inefficient and may bring down your connection for a large table.
NB - You need to do this first on a test copy of your table!
When I did it, I found that unless I also included AND n1.id <> n2.id, it deleted every row in the table.
If you want to keep the row with the lowest id value:
DELETE n1 FROM names n1, names n2 WHERE n1.id > n2.id AND n1.name = n2.name
If you want to keep the row with the highest id value:
DELETE n1 FROM names n1, names n2 WHERE n1.id < n2.id AND n1.name = n2.name
I used this method in MySQL 5.1
Not sure about other versions.
Update: Since people Googling for removing duplicates end up here
Although the OP's question is about DELETE, please be advised that using INSERT and DISTINCT is much faster. For a database with 8 million rows, the below query took 13 minutes, while using DELETE, it took more than 2 hours and yet didn't complete.
INSERT INTO tempTableName(cellId,attributeId,entityRowId,value)
SELECT DISTINCT cellId,attributeId,entityRowId,value
FROM tableName;
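A sketch of the rename-swap variant of that approach, applied to the names table from the question and keeping the lowest id per name (names_dedup and names_old are made-up names):
CREATE TABLE names_dedup LIKE names;
INSERT INTO names_dedup (id, name)
SELECT MIN(id), name FROM names GROUP BY name;
RENAME TABLE names TO names_old, names_dedup TO names;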
| MySQL | 4,685,173 | 477 |
My problem started off with me not being able to log in as root any more on my mysql install. I was attempting to run mysql without passwords turned on... but whenever I ran the command
# mysqld_safe --skip-grant-tables &
I would never get the prompt back. I was trying to follow these instructions to recover the password.
The screen just looks like this:
root@jj-SFF-PC:/usr/bin# mysqld_safe --skip-grant-tables
120816 11:40:53 mysqld_safe Logging to syslog.
120816 11:40:53 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
and I don't get a prompt to start typing the SQL commands to reset the password.
When I kill it by pressing CTRL + C, I get the following message:
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
If I retry the command and leave it long enough, I do get the following series of messages:
root@jj-SFF-PC:/run/mysqld# 120816 13:15:02 mysqld_safe Logging to syslog.
120816 13:15:02 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
120816 13:16:42 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
[1]+ Done mysqld_safe --skip-grant-tables
root@jj-SFF-PC:/run/mysqld#
But then if I try to log in as root by doing:
# mysql -u root
I get the following error message:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I checked and the /var/run/mysqld/mysqld.sock file does not exist. The folder does, but not the file.
Also, I don't know if this helps or not, but I ran find / -name mysqld and it came up with:
/var/run/mysqld - folder
/usr/sbin/mysqld - file
/run/mysqld - folder
I don't know if this is normal or not. But I'm including this info just in case it helps.
I finally decided to uninstall and reinstall mysql.
apt-get remove mysql-server
apt-get remove mysql-client
apt-get remove mysql-common
apt-get remove phpmyadmin
After reinstalling all packages again in the same order as above, during the phpmyadmin install, I got the same error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
So I tried again to uninstall/reinstall. This time, after I uninstalled the packages, I also manually renamed all mysql files and directories to mysql.bad in their respective locations.
/var/lib/mysql
/var/lib/mysql/mysql
/var/log/mysql
/usr/lib/perl5/DBD/mysql
/usr/lib/perl5/auto/DBD/mysql
/usr/lib/mysql
/usr/bin/mysql
/usr/share/mysql
/usr/share/dbconfig-common/internal/mysql
/etc/init.d/mysql
/etc/apparmor.d/abstractions/mysql
/etc/mysql
Then I tried to reinstall mysql-server and mysql-client again. But I've noticed that it doesn't prompt me for a password. Isn't it supposed to ask for an admin password?
| Try this command,
sudo service mysql start
| MySQL | 11,990,708 | 474 |
Some background:
I have a Java 1.6 webapp running on Tomcat 7. The database is MySQL 5.5. Previously, I was using Mysql JDBC driver 5.1.23 to connect to the DB. Everything worked. I recently upgraded to Mysql JDBC driver 5.1.33. After the upgrade, Tomcat would throw this error when starting the app.
WARNING: Unexpected exception resolving reference
java.sql.SQLException: The server timezone value 'UTC' is unrecognized or represents
more than one timezone. You must configure either the server or JDBC driver (via
the serverTimezone configuration property) to use a more specifc timezone value if
you want to utilize timezone support.
Why is this happening?
| Apparently, to get version 5.1.33 of MySQL JDBC driver to work with UTC time zone, one has to specify the serverTimezone explicitly in the connection string.
jdbc:mysql://localhost/db?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
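If you build the connection in code rather than configuration, the property is simply part of the JDBC URL; a minimal sketch (database name and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

public class TimezoneCheck {
    public static void main(String[] args) throws Exception {
        // serverTimezone tells Connector/J which time zone to assume for the MySQL server
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/db?serverTimezone=UTC", "user", "password")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}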
| MySQL | 26,515,700 | 470 |
The DynamoDB Wikipedia article says that DynamoDB is a "key-value" database. However, calling it a "key-value" database completely misses an extremely fundamental feature of DynamoDB, that of the sort key: Keys have two parts (partition key and sort key) and items with the same partition key can be efficiently retrieved together sorted by the sort key.
Cassandra also has exactly the same sorting-items-inside-a-partition feature (which it calls "clustering key"), and the Cassandra Wikipedia article uses the term wide column store to describe it. However, while this term "wide column" is better than "key-value", it is still somewhat inappropriate because it describes the more general situation where an item can have a very large number of unrelated columns - not necessarily a sorted list of separate items.
So my question is whether there is a more appropriate term that can describe the data model of a database like DynamoDB and Cassandra - databases which like a key-value store can efficiently retrieve items for individual keys, but can also efficiently retrieve items sorted by the key or just a part of it (DynamoDB's sort key or Cassandra's clustering key).
Before CQL was introduced, Cassandra adhered more strictly to the wide column store data model, where you only had rows identified by a row key and containing sorted key/value columns. With the introduction of CQL, rows became known as partitions and columns could optionally be grouped into logical rows via clustering keys.
Up until Cassandra 3.0, CQL was simply an abstraction on top of the original Thrift data model and there was no concept of CQL rows within the storage engine. They were just a sorted set of columns with a compound key consisting of the concatenated values of the clustering keys. More details are given in this article. Now there is native support for CQL in the storage engine, which allows CQL data models to be stored more efficiently.
However, if you think of a CQL row as a logical grouping of columns within the same partition, Cassandra could still be considered a wide column store. In any case, there isn't, to my knowledge, another well-established term to describe this kind of database.
| Cassandra | 60,798,118 | 15 |
I have two processes, and each process does:
1) connect to an Oracle DB and read a specific table
2) form a DataFrame and process it
3) save the DataFrame to Cassandra
If I run both processes in parallel, both try to read from Oracle,
and I get the error below while the second process is reading the data:
ERROR ValsProcessor2: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- *(1) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#290L])
+- *(1) Scan JDBCRelation((SELECT * FROM BM_VALS WHERE ROWNUM <= 10) T) [numPartitions=2] [] PushedFilters: [], ReadSchema: struct<>
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:150)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:294)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
at com.snp.processors.BenchmarkModelValsProcessor2.process(BenchmarkModelValsProcessor2.scala:43)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$2.apply(Utils.scala:28)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$2.apply(Utils.scala:28)
at com.sp.MigrationDriver$$anonfun$main$2$$anonfun$apply$1.apply(MigrationDriver.scala:78)
at com.sp.MigrationDriver$$anonfun$main$2$$anonfun$apply$1.apply(MigrationDriver.scala:78)
at scala.Option.map(Option.scala:146)
at com.sp.MigrationDriver$$anonfun$main$2.apply(MigrationDriver.scala:75)
at com.sp.MigrationDriver$$anonfun$main$2.apply(MigrationDriver.scala:74)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.MapLike$DefaultKeySet.foreach(MapLike.scala:174)
at com.sp.MigrationDriver$.main(MigrationDriver.scala:74)
at com.sp.MigrationDriver.main(MigrationDriver.scala)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.needToCopyObjectsBeforeShuffle(ShuffleExchangeExec.scala:163)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.prepareShuffleDependency(ShuffleExchangeExec.scala:300)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:91)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
... 37 more
What am I doing wrong here? How do I fix this?
| I was closing the SparkSession in a finally block in the first processor (the called class). I moved it out of the processor and into the calling class, which solved the issue.
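A minimal sketch of that pattern (class names are hypothetical, loosely modelled on the ones in the stack trace): the driver creates the SparkSession once, hands it to each processor, and stops it only after all of them have finished.
import org.apache.spark.sql.SparkSession

object MigrationDriver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("migration").getOrCreate()
    try {
      // processors only use the session; none of them calls spark.stop()
      new ValsProcessor1(spark).process()
      new ValsProcessor2(spark).process()
    } finally {
      spark.stop() // closed exactly once, in the calling class
    }
  }
}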
| Cassandra | 53,011,256 | 15 |
When I try to start Cassandra after patching my OS, I get this error:
Exception (java.lang.AbstractMethodError) encountered during startup: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:476)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:59
at com.datastax.bdp.DseModule.main(DseModule.java:93)
ERROR [main] 2018-01-17 13:18:03,330 CassandraDaemon.java:705 - Exception encountered during startup
java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
Does anyone know why, with no other changes, I'm running into this error now?
This seems to relate to an upgrade of the JDK to 8u161, which was released 2 days ago.
A ticket has been opened on the Cassandra Jira
There is no published work-around that I can find. You might have to go back to an earlier version of the JDK or wait for Cassandra 3.11.2 which fixes the issue.
Edit: It's worth pointing out that this has now been resolved in 3.11.2, which has been released, so you can simply upgrade to that version to resolve the problem.
| Cassandra | 48,328,661 | 15 |
I'm trying to run Cassandra in a docker container and connect to it from my Mac (the host) but I keep getting Connection refused errors.
The docker command:
=> docker run --rm --name cassandra -d cassandra:3.11 -p 9042:9042
=> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ecc9dcd8647 cassandra:3.11 "/docker-entrypoin..." 33 minutes ago Up 33 minutes 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp cassandra
=> cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1':
error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error:
Connection refused")})
If I exec into the container with a bash shell:
=> docker exec -it cassandra bash
I can run the cqlsh and connect to cassandra locally.
What am I missing?
The port is still not exposed outside the container: options placed after the image name are passed to the container's entrypoint rather than to docker run, so -p has to come before the image name.
Try this
docker run -p 9042:9042 --rm --name cassandra -d cassandra:3.11
Run docker ps and you should see something like this:
0.0.0.0:9042->9042/tcp
For more info : https://docs.docker.com/engine/reference/commandline/run/
| Cassandra | 47,672,400 | 15 |
I have come to a dilemma: I cannot choose which solution is going to be better for me. I have a very large table (a couple of hundred GBs) and a couple of smaller ones (a couple of GBs). In order to create my data pipeline in Spark and use Spark ML, I need to join these tables and do a couple of GroupBy (aggregate) operations. Those operations were really slow for me, so I chose to do one of these two:
Use Cassandra and use indexing to speed up the GroupBy operations.
Use Parquet and Partitioning based on the layout of the data.
I can say that Parquet partitioning works faster and scales better, with less memory overhead, than Cassandra. So the question is this:
If the developer infers and understands the data layout and the way it is going to be used, wouldn't it be better to just use Parquet, since you will have more control over it? Why should I pay the price for the overhead that Cassandra causes?
Cassandra is also a good solution for analytics use cases, but in another way. Before you model your keyspaces, you have to know how you need to read the data. You can also use where clauses and range queries, but in a heavily restricted way. Sometimes you will hate this restriction, but there are reasons for it. Cassandra is not like MySQL. In MySQL, performance is not a key feature; it's more about flexibility and consistency. Cassandra is a high-performance write/read database, better at writes than at reads. Cassandra also has linear scalability.
Okay, a bit about your use case: Parquet is the better option for you. This is why:
You aggregate raw data over really large, unsplit datasets
Your Spark ML job sounds like a scheduled, not long-running, job (once a week or once a day?)
This fits the use cases of Parquet better. Parquet is a solution for ad-hoc analysis and filter-style analysis. Parquet is really nice if you need to run a query 1 or 2 times a month. Parquet is also a nice solution if a marketing guy wants to know one thing and the response time is not so important. Simply and short:
Use Cassandra if you know the queries.
Use Cassandra if a query will be used in a daily business
Use Cassandra if real time matters (I'm talking about a maximum of 30 seconds latency from when the customer takes an action to when I can see the result in my dashboard)
Use Parquet if real time doesn't matter
Use Parquet if the query will not run 100 times a day
Use Parquet if you want to do batch processing stuff
| Cassandra | 37,806,066 | 15 |
I am trying to configure Cassandra DataStax Community Edition for remote connections on Windows.
Cassandra Server is installed on a Windows 7 PC. With the local cqlsh it connects perfectly to the local server.
But when I try to connect with cqlsh from another PC in the same network, I get this error message:
Connection error: ('Unable to connect to any servers', {'MYHOST':
error(10061, "Tried connecting to [('HOST_IP', 9042)]. Last error: No
connection could be made because the target machine actively refused
it")})
So I am wondering how to correctly configure the Cassandra server (what changes should I make in the cassandra.yaml config file) to allow remote connections.
Thank you in advance!
| How about this:
Make these changes in the cassandra.yaml config file:
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: [node-ip]
listen_address: [node-ip]
seed_provider:
- class_name: ...
- seeds: "[node-ip]"
reference: https://gist.github.com/andykuszyk/7644f334586e8ce29eaf8b93ec6418c4
| Cassandra | 36,133,127 | 15 |
I have a RHEL 7.0 server running Cassandra 2.2.3 which I tried to upgrade to 3.0. When I ran yum update it showed me there was a new version of Cassandra to update to, and upgraded the server to 2.2.4-1, but not 3.0.
Now if I search yum for dsc30 I can find it, and presumably I can install it too, but why doesn't the automated upgrade from 2.2 to 3.0 happen?
I've got a lot of data on my server and don't want to experiment on it. I had another test server running Ubuntu 14.04 and that one upgraded from 2.2 to 3.0 just fine, but on RHEL my server can't find the upgrade to 3.0.
Thanks
| It's a different application and you can have 2.* and 3.* in parallel. Use
yum install dsc30
to install version 3.
If you want to upgrade your current installation then follow the steps described here
| Cassandra | 34,355,949 | 15 |
I cannot cqlsh to remote host
./cqlsh xx.xx.x.xxx 9042
Connection error: ('Unable to connect to any servers', {'10.101.33.163':
ConnectionException(u'Did not get expected SupportedMessage response;
instead, got: <ErrorMessage code=0000 [Server error]
message="io.netty.handler.codec.DecoderException:
org.apache.cassandra.transport.ProtocolException: Invalid or unsupported
protocol version: 4">',)})
I am using cqlsh 5.0.1 and python 2.7.10
./cqlsh --version
cqlsh 5.0.1
python -V
Python 2.7.10
I am on a Mac and used the instructions from http://www.datastax.com/2012/01/working-with-apache-cassandra-on-mac-os-x to download Cassandra.
Cassandra on my local machine is 2.2.1 (as I understand from the zip file) and it appears that Cassandra on the remote host is NOT 2.2.1 (I assume it is either 2.0 or 2.1). Without definitively knowing what version is on the remote host, how can I connect to Cassandra on the remote host?
| 1) Make sure the service is running:
$ ps aux | grep cassandra
Example:
106 7387 5.1 70.9 2019816 1454636 ? SLl Sep02 16:39 /usr/lib/jvm/java-7-oracle/jre//bin/java -Ddse.system_cpu_cores=2 -Ddse.system_memory_in_mb=2003 -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader -Ddse.system_cpu_cores=2 -Ddse.system_memory_in_mb=2003 -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader -ea -javaagen...
2) Make sure you are using the correct IP by checking the server config:
$ ifconfig
Example:
eth1 Link encap:Ethernet HWaddr 08:00:27:a6:4e:46
inet addr:192.168.56.10 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fea6:4e46/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
3) Ensure you can connect to that IP from the server you are on:
$ ssh [email protected]
4) Check the node's status and also confirm it shows the same IP:
$ nodetool status
5) run the command to connect with the IP (only specify port if you are not using the default):
$ cqlsh xxx.xxx.xx.xx
| Cassandra | 32,364,969 | 15 |
I'm logging in on Ubuntu 14.10 to Cassandra 2.0.8 with Java 1.7.0_60-b19 using
cqlsh -u cassandra -p cassandra
I'm running:
CREATE USER a WITH PASSWORD 'a' NOSUPERUSER;
I'm getting the error:
Bad Request: Only superusers are allowed to perform CREATE USER queries
The problem with this reasoning: I am logged in as the superuser.
My question is: If I'm logged into cqlsh as the Cassandra user, why am I told that I'm not the superuser?
You need to enable PasswordAuthenticator in the cassandra.yaml file.
To enable PasswordAuthenticator you need to change the authenticator property in cassandra.yaml.
Change
authenticator: AllowAllAuthenticator
to
authenticator: PasswordAuthenticator
After that, log in with the following command and then you will be able to add a new user:
cqlsh -u cassandra -p cassandra
| Cassandra | 24,219,953 | 15 |
I have gone through this article and here is the schema I have got from it. This is helpful for my application for maintaining statuses of a user, but how can I extend it to maintain a one-to-one chat archive and relations between users? By relations I mean people belonging to a specific group. I am new to this and need an approach for it.
Requirements :
I want to store messages between user-user in a table.
Whenever a user want to load messages by a user. I want to retrieve them back and send it to user.
I want to retrieve all the messages from different users to the user when user has requested.
I also want to store the class of users. For example, user1 and user2 belong to "family"; user3, user4, user1 belong to "friends"; etc. This group can be a custom name given by the user.
This is what I have tried so far:
CREATE TABLE chatarchive (
chat_id uuid PRIMARY KEY,
username text,
body text
)
CREATE TABLE chatseries (
username text,
time timeuuid,
chat_id uuid,
PRIMARY KEY (username, time)
) WITH CLUSTERING ORDER BY (time ASC)
CREATE TABLE chattimeline (
to text,
username text,
time timeuuid,
chat_id uuid,
PRIMARY KEY (username, time)
) WITH CLUSTERING ORDER BY (time ASC)
Below is the schema that I currently have:
CREATE TABLE users (
username text PRIMARY KEY,
password text
)
CREATE TABLE friends (
username text,
friend text,
since timestamp,
PRIMARY KEY (username, friend)
)
CREATE TABLE followers (
username text,
follower text,
since timestamp,
PRIMARY KEY (username, follower)
)
CREATE TABLE tweets (
tweet_id uuid PRIMARY KEY,
username text,
body text
)
CREATE TABLE userline (
username text,
time timeuuid,
tweet_id uuid,
PRIMARY KEY (username, time)
) WITH CLUSTERING ORDER BY (time DESC)
CREATE TABLE timeline (
username text,
time timeuuid,
tweet_id uuid,
PRIMARY KEY (username, time)
) WITH CLUSTERING ORDER BY (time DESC)
| With C* you need to store data in the way you'll use it.
So let's see how this would look like for this case:
I want to store messages between user-user in a table.
Whenever a user want to load messages by a user. I want to retrieve them back and send it to user.
CREATE TABLE chat_messages (
message_id uuid,
from_user text,
to_user text,
body text,
class text,
time timeuuid,
PRIMARY KEY ((from_user, to_user), time)
) WITH CLUSTERING ORDER BY (time ASC);
This will allow you to retrieve a timeline of messages between two users. Note that a composite primary key is used so that wide rows are created for each pair of users.
SELECT * FROM chat_messages WHERE from_user = 'mike' AND to_user = 'john' ORDER BY time DESC ;
I want to retrieve all the messages from different users to the user when user has requested.
CREATE INDEX chat_messages_to_user ON chat_messages (to_user);
This allows you to do:
SELECT * FROM chat_messages WHERE to_user = 'john';
And also want to store class of users. I mean for example user1 and user2 belong to "family" user3, user4, user1 belong to friends etc... This group can be custom name given by the user.
CREATE INDEX chat_messages_class ON chat_messages (class);
This will allow you to do:
SELECT * FROM chat_messages WHERE class = 'family';
Note that in this kind of database, DENORMALIZED DATA IS A GOOD PRACTICE. This means that using the name of the class again and again is not a bad practice.
Also note that I haven't used a 'chat_id' nor a 'chats' table. We could easily add this but I feel that your use case didn't require it as it has been put forward. In general, you cannot do joins in C*. So, using a chat id would imply two queries.
EDIT: Secondary indexes are inefficient. A materialised view will be a better implementation with C* 3.0
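A sketch of what such a view could look like for the "messages to a user" query, replacing the chat_messages_to_user index (assuming C* 3.0+; the view name is made up):
CREATE MATERIALIZED VIEW chat_messages_by_recipient AS
    SELECT * FROM chat_messages
    WHERE to_user IS NOT NULL AND from_user IS NOT NULL AND time IS NOT NULL
    PRIMARY KEY (to_user, from_user, time);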
| Cassandra | 24,176,883 | 15 |
I have full access to the Cassandra installation files and a PasswordAuthenticator configured in cassandra.yaml. What do I have to do to reset admin user's password that has been lost, while keeping the existing databases intact?
| The hash has changed for Cassandra 2.1:
Switch to authenticator: AllowAllAuthenticator
Restart cassandra
UPDATE system_auth.credentials SET salted_hash = '$2a$10$H46haNkcbxlbamyj0OYZr.v4e5L08WTiQ1scrTs9Q3NYy.6B..x4O' WHERE username='cassandra';
Switch back to authenticator: PasswordAuthenticator
Restart cassandra
Login as cassandra/cassandra
CREATE USER and ALTER USER to your heart's content.
| Cassandra | 18,398,987 | 15 |
I've cobbled together the below code that doesn't do anything complex -- just creates a byte[] variable, writes it into a blob field in Cassandra (v1.2, via the new Datastax CQL library), then reads it back out again.
When I put it in it's 3 elements long, and when I read it back it's 84 elements long...! This means the thing I'm actually trying to do (serialize Java objects) fails with an org.apache.commons.lang.SerializationException: java.io.StreamCorruptedException: invalid stream header: 81000008 error when trying to deserialize again.
Here's some sample code that demonstrates my problem:
import java.nio.ByteBuffer;
import org.apache.commons.lang.SerializationUtils;
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
public class TestCassandraSerialization {
private Cluster cluster;
private Session session;
public TestCassandraSerialization(String node) {
connect(node);
}
private void connect(String node) {
cluster = Cluster.builder().addContactPoint(node).build();
Metadata metadata = cluster.getMetadata();
System.out.printf("Connected to %s\n", metadata.getClusterName());
for (Host host: metadata.getAllHosts()) {
System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
host.getDatacenter(), host.getAddress(), host.getRack());
}
session = cluster.connect();
}
public void setUp() {
session.execute("CREATE KEYSPACE test_serialization WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};");
session.execute("CREATE TABLE test_serialization.test_table (id text PRIMARY KEY, data blob)");
}
public void tearDown() {
session.execute("DROP KEYSPACE test_serialization");
}
public void insertIntoTable(String key, byte[] data) {
PreparedStatement statement = session.prepare("INSERT INTO test_serialization.test_table (id,data) VALUES (?, ?)");
BoundStatement boundStatement = new BoundStatement(statement);
session.execute(boundStatement.bind(key,ByteBuffer.wrap(data)));
}
public byte[] readFromTable(String key) {
String q1 = "SELECT * FROM test_serialization.test_table WHERE id = '"+key+"';";
ResultSet results = session.execute(q1);
for (Row row : results) {
ByteBuffer data = row.getBytes("data");
return data.array();
}
return null;
}
public static boolean compareByteArrays(byte[] one, byte[] two) {
if (one.length > two.length) {
byte[] foo = one;
one = two;
two = foo;
}
// so now two is definitely the longer array
for (int i=0; i<one.length; i++) {
//System.out.printf("%d: %s\t%s\n", i, one[i], two[i]);
if (one[i] != two[i]) {
return false;
}
}
return true;
}
public static void main(String[] args) {
TestCassandraSerialization tester = new TestCassandraSerialization("localhost");
try {
tester.setUp();
byte[] dataIn = new byte[]{1,2,3};
tester.insertIntoTable("123", dataIn);
byte[] dataOut = tester.readFromTable("123");
System.out.println(dataIn);
System.out.println(dataOut);
System.out.println(dataIn.length); // prints "3"
System.out.println(dataOut.length); // prints "84"
System.out.println(compareByteArrays(dataIn, dataOut)); // prints false
String toSave = "Hello, world!";
dataIn = SerializationUtils.serialize(toSave);
tester.insertIntoTable("toSave", dataIn);
dataOut = tester.readFromTable("toSave");
System.out.println(dataIn.length); // prints "20"
System.out.println(dataOut.length); // prints "104"
// The below throws org.apache.commons.lang.SerializationException: java.io.StreamCorruptedException: invalid stream header: 81000008
String hasLoaded = (String) SerializationUtils.deserialize(dataOut);
System.out.println(hasLoaded);
} finally {
tester.tearDown();
}
}
}
It looks like the right stuff makes it into the database:
cqlsh:flight_cache> select * from test_serialization.test_table;
id | data
--------+--------------------------------------------
123 | 0x010203
toSave | 0xaced000574000d48656c6c6f2c20776f726c6421
cqlsh:flight_cache>
So it looks like an error when reading, rather than writing, the binary data. Can anyone give me any pointers as to what I'm doing wrong?
| The problem is almost certainly because the array returned by ByteBuffer.array() is the full backing array, but the data may only be contained within a portion of it.
The valid data that is being returned starts at ByteBuffer.arrayOffset() and is of length ByteBuffer.remaining(). To get a byte array containing just the valid data use this code in readFromTable:
byte[] result = new byte[data.remaining()];
data.get(result);
then your data is in result and you can return that.
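Applied to the readFromTable method from the question, the fix could look like this:
public byte[] readFromTable(String key) {
    String q1 = "SELECT * FROM test_serialization.test_table WHERE id = '" + key + "';";
    ResultSet results = session.execute(q1);
    for (Row row : results) {
        ByteBuffer data = row.getBytes("data");
        byte[] result = new byte[data.remaining()]; // only the bytes belonging to this value
        data.get(result);                           // copies starting at the buffer's position
        return result;
    }
    return null;
}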
| Cassandra | 17,282,361 | 15 |
I'm developing a little web application for studying Apache Cassandra and Java EE 6.
Cassandra version is 1.1.6.
Have a problem driving me mad...
I created a table with a counter (using cqlsh v. 3.0.0)
CREATE TABLE test (
author varchar PRIMARY KEY,
tot counter
)
and put some values this way:
update test set tot = tot +1 where author = 'myAuthor';
the column family is perfectly updated
author | tot
----------+-----
myAuthor | 1
BUT, if you try to delete this row and then update again (with the same key), then nothing happens! The table is no longer updated and I can't figure out why: it seems to me that once you have used a key you can't use it anymore.
I looked for clues in the Datastax documentation (http://www.datastax.com/docs/1.1/references/cql/cql_lexicon) but didn't manage to find a solution.
Can someone help me?
Thanks in advance
| Cassandra has some strict limits on deleting counters. You cannot really delete a counter and then use it again in any short period of time. From the Cassandra wiki:
Counter removal is intrinsically limited. For instance, if you issue very quickly the sequence "increment, remove, increment" it is possible for the removal to be lost (if for some reason the remove happens to be the last received messages). Hence, removal of counters is provided for definitive removal only, that is when the deleted counter is not increment afterwards. This holds for row deletion too: if you delete a row of counters, incrementing any counter in that row (that existed before the deletion) will result in an undetermined behavior. Note that if you need to reset a counter, one option (that is unfortunately not concurrent safe) could be to read its value and add -value.
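For example, with the test table from the question, the reset trick would look like this (not safe under concurrent increments):
SELECT tot FROM test WHERE author = 'myAuthor';            -- suppose this returns 42
UPDATE test SET tot = tot - 42 WHERE author = 'myAuthor';  -- counter is now back to 0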
| Cassandra | 13,653,681 | 15 |
For operations monitoring of my application, I am looking for something similar to the commonly used "SQL connection validation" query
SELECT 1;
in Cassandra, using the Hector driver. I have tried things like looking at Cluster.getKnownPoolHosts() and .getConnectionManager().getActivePools(). But it seems that their status is not continuously updated, only when I actually try to access Cassandra with a query.
I'd like my health check to be independent of any keyspaces or user CFs that need to exist, so just running a "dummy" query seems difficult (against what?). And of course it shouldn't take a lot of memory or generate any significant load.
Can I force Hector somehow to update its connection pool status without running a real query?
(BTW: CQL doesn't even accept "SELECT 1" as a valid query.)
| With CQL3, I'm using the following query:
SELECT now() FROM system.local;
It would be nice to get rid of the FROM clause altogether to make this generic, in case the user does not have access to the system keyspace or local column family for some reason. But as with the other answers, at least this should not give false positives.
| Cassandra | 10,246,287 | 15 |
I just wanted to know if there is a fundamental difference between HBase, Cassandra, CouchDB and MongoDB?
In other words, are they all competing in the exact same market and trying to solve the exact same problems? Or do they fit best in different scenarios?
All this comes down to the question: what should I choose when? Is it a matter of taste?
Federico
| Those are some long answers from @Bohzo. (but they are good links)
The truth is, they're "kind of" competing. But they definitely have different strengths and weaknesses and they definitely don't all solve the same problems.
For example Couch and Mongo both provide Map-Reduce engines as part of the main package. HBase is (basically) a layer over top of Hadoop, so you also get M-R via Hadoop. Cassandra is highly focused on being a Key-Value store and has plug-ins to "layer" Hadoop over top (so you can map-reduce).
Some of the DBs provide MVCC (Multi-version concurrency control). Mongo does not.
All of these DBs are intended to scale horizontally, but they do it in different ways. All of these DBs are also trying to provide flexibility in different ways. Flexible document sizes or REST APIs or high redundancy or ease of use, they're all making different trade-offs.
So to your question: In other words, are they all competing in the exact same market and trying to solve the exact same problems?
Yes: they're all trying to solve the issue of database-scalability and performance.
No: they're definitely making different sets of trade-offs.
What should you start with?
Man, that's a tough question. I work for a large company pushing tons of data and we've been through a few years. We tried Cassandra at one point a couple of years ago and it couldn't handle the load. We're using Hadoop everywhere, but it definitely has a steep learning curve and it hasn't worked out in some of our environments. More recently we've tried to do Cassandra + Hadoop, but it turned out to be a lot of configuration work.
Personally, my department is moving several things to MongoDB. Our reasons for this are honestly just simplicity.
Setting up Mongo on a linux box takes minutes and doesn't require root access or a change to the file system or anything fancy. There are no crazy config files or java recompiles required. So from that perspective, Mongo has been the easiest "gateway drug" for getting people on to KV/Document stores.
| Cassandra | 3,652,310 | 15 |
We are looking at using a NoSQL database system for a large project. Currently, we have read a bit about MongoDB and Cassandra, though we have absolutely no experience with either. We are very proficient with traditional relational databases like MySQL and Microsoft SQL, but the NoSQL (key/value store) is a new paradigm for us.
So basically, which NoSQL database do you guys recommend for our use?
We do both heavy writes and reads. Basically we have tens of thousands of devices that are reporting:
device_id (int), latitude (decimal), longitude (decimal), date/time (datetime), heading char(2), speed (int)
Every minute. So, at peak times we need to be able to process hundreds of writes a second.
Then, we also have users querying this information in the form of: give me all messages from device_id 1234 for the last day, or last week. Users also do other queries like: give me all messages from device_1234 where speed is greater than 50 and the date is today.
So, our initial thoughts are that MongoDB or Cassandra are going to allow us to scale this much more easily than using a traditional database.
A document or value in MongoDB or Cassandra for us, might look like:
{
device_id: 1234,
location: [-118.12719739973545, 33.859012351859946],
datetime: 1282274060,
heading: "N",
speed: 34
}
Which system do you guys recommend? Thanks greatly.
| MongoDB has built-in support for geospatial indexes: http://www.mongodb.org/display/DOCS/Geospatial+Indexing
As an example, to find the 10 closest devices to that location you can just do:
db.devices.find({location: {$near: [-118.12719739973545, 33.859012351859946]}}).limit(10)
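Note that $near needs a geospatial index on the field; assuming the collection is called devices as above, creating one could look like this:
db.devices.ensureIndex({location: "2d"})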
| Cassandra | 3,527,956 | 15 |
Can someone explain how MapReduce works with Cassandra .6? I've read through the word count example, but I don't quite follow what's happening on the Cassandra end vs. the "client" end.
https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/
For instance, let's say I'm using Python and Pycassa, how would I load in a new map reduce function, and then call it? Does my map reduce function have to be java that's installed on the cassandra server? If so, how do I call it from Pycassa?
There's also mention of Pig making this all easier, but I'm a complete Hadoop noob, so that didn't really help.
Your answer can use Thrift or whatever, I just mentioned Pycassa to denote the client side. I'm just trying to understand the difference between what runs in the Cassandra cluster vs. the actual server making the requests.
| From what I've heard (and from here), the way that a developer writes a MapReduce program that uses Cassandra as the data source is as follows. You write a regular MapReduce program (the example you linked to is for the pure-Java version) and the jars that are now available provide a CustomInputFormat that allows the input source to be Cassandra (instead of the default, which is Hadoop).
If you're using Pycassa I'd say you're out of luck until either (1) the maintainer of that project adds support for MapReduce or (2) you throw some Python functions together that write up a Java MapReduce program and run it. The latter is definitely a bit of a hack but would get you up and going.
| Cassandra | 2,734,005 | 15 |
EDIT: Although yukim's workaround does work, I found that by downgrading the JDK from 8u261 to 8u251, the sigar lib works correctly.
Windows 10 x64 Pro
Cassandra 3.11.7
NOTE: I have JDK 11.0.7 as my main JDK, so I override JAVA_HOME and PATH in the batch file for Cassandra.
Opened admin prompt and...
java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
python --version
Python 3.8.5
EDIT #1: I switched to Python 2.7.18, which at least made cqlsh start and then error out because no server was running; on 3.8.5 it wasn't even running.
echo %JAVA_HOME%
c:\progra~1\java\jdk1.8.0_261
When I run cassandra.bat, I get:
Detected powershell execution permissions. Running with enhanced startup scripts.
*---------------------------------------------------------------------*
*---------------------------------------------------------------------*
WARNING! Automatic page file configuration detected.
It is recommended that you disable swap when running Cassandra
for performance and stability reasons.
*---------------------------------------------------------------------*
*---------------------------------------------------------------------*
*---------------------------------------------------------------------*
*---------------------------------------------------------------------*
WARNING! Detected a power profile other than High Performance.
Performance of this node will suffer.
Modify conf\cassandra.env.ps1 to suppress this warning.
*---------------------------------------------------------------------*
*---------------------------------------------------------------------*
C:\Program Files\apache-cassandra-3.11.7\bin>CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I
CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V
CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V
CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J
CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B
INFO [main] 2020-07-28 16:36:17,701 YamlConfigurationLoader.java:89 - Configuration location: file:/C:/Program%20Files/apache-cassandra-3.11.7/conf/cassandra.yaml
INFO [main] 2020-07-28 16:36:18,108 Config.java:534 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=null; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@235834f2; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=true; enable_sasi_indexes=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; 
memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_negotiable_protocol_version=-2147483648; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; repair_session_max_tree_depth=18; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=null; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=127.0.0.1}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_on_duplicate_row_detection=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@5656be13; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO [main] 2020-07-28 16:36:18,110 DatabaseDescriptor.java:381 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2020-07-28 16:36:18,113 DatabaseDescriptor.java:439 - Global memtable on-heap threshold is enabled at 2018MB
INFO [main] 2020-07-28 16:36:18,114 DatabaseDescriptor.java:443 - Global memtable off-heap threshold is enabled at 2018MB
INFO [main] 2020-07-28 16:36:18,249 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
INFO [main] 2020-07-28 16:36:18,250 DatabaseDescriptor.java:773 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}.
INFO [main] 2020-07-28 16:36:18,384 JMXServerUtils.java:252 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi
INFO [main] 2020-07-28 16:36:18,391 CassandraDaemon.java:490 - Hostname: W-2UA8232KLJ-0
INFO [main] 2020-07-28 16:36:18,391 CassandraDaemon.java:497 - JVM vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.8.0_261
INFO [main] 2020-07-28 16:36:18,393 CassandraDaemon.java:498 - Heap size: 7.883GiB/7.883GiB
INFO [main] 2020-07-28 16:36:18,393 CassandraDaemon.java:503 - Code Cache Non-heap memory: init = 2555904(2496K) used = 5162240(5041K) committed = 5177344(5056K) max = 251658240(245760K)
INFO [main] 2020-07-28 16:36:18,394 CassandraDaemon.java:503 - Metaspace Non-heap memory: init = 0(0K) used = 19777368(19313K) committed = 20316160(19840K) max = -1(-1K)
INFO [main] 2020-07-28 16:36:18,394 CassandraDaemon.java:503 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2361872(2306K) committed = 2490368(2432K) max = 1073741824(1048576K)
INFO [main] 2020-07-28 16:36:18,395 CassandraDaemon.java:503 - Par Eden Space Heap memory: init = 1006632960(983040K) used = 201345160(196626K) committed = 1006632960(983040K) max = 1006632960(983040K)
INFO [main] 2020-07-28 16:36:18,397 CassandraDaemon.java:503 - Par Survivor Space Heap memory: init = 125829120(122880K) used = 0(0K) committed = 125829120(122880K) max = 125829120(122880K)
INFO [main] 2020-07-28 16:36:18,397 CassandraDaemon.java:503 - CMS Old Gen Heap memory: init = 7331643392(7159808K) used = 0(0K) committed = 7331643392(7159808K) max = 7331643392(7159808K)
INFO [main] 2020-07-28 16:36:18,398 CassandraDaemon.java:505 - Classpath: C:\Program Files\apache-cassandra-3.11.7\conf;C:/Program Files/apache-cassandra-3.11.7/lib/airline-0.6.jar;C:/Program Files/apache-cassandra-3.11.7/lib/antlr-runtime-3.5.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/apache-cassandra-3.11.7.jar;C:/Program Files/apache-cassandra-3.11.7/lib/apache-cassandra-thrift-3.11.7.jar;C:/Program Files/apache-cassandra-3.11.7/lib/asm-5.0.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/caffeine-2.2.6.jar;C:/Program Files/apache-cassandra-3.11.7/lib/cassandra-driver-core-3.0.1-shaded.jar;C:/Program Files/apache-cassandra-3.11.7/lib/commons-cli-1.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/commons-codec-1.9.jar;C:/Program Files/apache-cassandra-3.11.7/lib/commons-lang3-3.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/commons-math3-3.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/compress-lzf-0.8.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/concurrent-trees-2.4.0.jar;C:/Program Files/apache-cassandra-3.11.7/lib/concurrentlinkedhashmap-lru-1.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/disruptor-3.0.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/ecj-4.4.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/guava-18.0.jar;C:/Program Files/apache-cassandra-3.11.7/lib/HdrHistogram-2.1.9.jar;C:/Program Files/apache-cassandra-3.11.7/lib/high-scale-lib-1.0.6.jar;C:/Program Files/apache-cassandra-3.11.7/lib/hppc-0.5.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jackson-annotations-2.9.10.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jackson-core-2.9.10.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jackson-databind-2.9.10.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jamm-0.3.0.jar;C:/Program Files/apache-cassandra-3.11.7/lib/javax.inject.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jbcrypt-0.3m.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jcl-over-slf4j-1.7.7.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jctools-core-1.2.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jflex-1.6.0.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jna-4.2.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/joda-time-2.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/json-simple-1.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/jstackjunit-0.0.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/libthrift-0.9.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/log4j-over-slf4j-1.7.7.jar;C:/Program Files/apache-cassandra-3.11.7/lib/logback-classic-1.1.3.jar;C:/Program Files/apache-cassandra-3.11.7/lib/logback-core-1.1.3.jar;C:/Program Files/apache-cassandra-3.11.7/lib/lz4-1.3.0.jar;C:/Program Files/apache-cassandra-3.11.7/lib/metrics-core-3.1.5.jar;C:/Program Files/apache-cassandra-3.11.7/lib/metrics-jvm-3.1.5.jar;C:/Program Files/apache-cassandra-3.11.7/lib/metrics-logback-3.1.5.jar;C:/Program Files/apache-cassandra-3.11.7/lib/netty-all-4.0.44.Final.jar;C:/Program Files/apache-cassandra-3.11.7/lib/ohc-core-0.4.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/ohc-core-j8-0.4.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/reporter-config-base-3.0.3.jar;C:/Program Files/apache-cassandra-3.11.7/lib/reporter-config3-3.0.3.jar;C:/Program Files/apache-cassandra-3.11.7/lib/sigar-1.6.4.jar;C:/Program Files/apache-cassandra-3.11.7/lib/slf4j-api-1.7.7.jar;C:/Program Files/apache-cassandra-3.11.7/lib/snakeyaml-1.11.jar;C:/Program Files/apache-cassandra-3.11.7/lib/snappy-java-1.1.1.7.jar;C:/Program 
Files/apache-cassandra-3.11.7/lib/snowball-stemmer-1.3.0.581.1.jar;C:/Program Files/apache-cassandra-3.11.7/lib/ST4-4.0.8.jar;C:/Program Files/apache-cassandra-3.11.7/lib/stream-2.5.2.jar;C:/Program Files/apache-cassandra-3.11.7/lib/thrift-server-0.3.7.jar;C:\Program Files\apache-cassandra-3.11.7\build\classes\main;C:\Program Files\apache-cassandra-3.11.7\build\classes\thrift;C:\Program Files\apache-cassandra-3.11.7\lib\jamm-0.3.0.jar
INFO [main] 2020-07-28 16:36:18,399 CassandraDaemon.java:507 - JVM Arguments: [-Dcassandra, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=C:\Program Files\apache-cassandra-3.11.7\logs, -Dcassandra.storagedir=C:\Program Files\apache-cassandra-3.11.7\data, -Xloggc:C:\Program Files\apache-cassandra-3.11.7/logs/gc.log, -ea, -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways, -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Xms8192M, -Xmx8192M, -Xmn1200M, -XX:+UseCondCardMark, -Djava.library.path=C:\Program Files\apache-cassandra-3.11.7\lib\sigar-bin, -XX:CompileCommandFile=C:\Program Files\apache-cassandra-3.11.7\conf\hotspot_compiler, -javaagent:C:\Program Files\apache-cassandra-3.11.7\lib\jamm-0.3.0.jar, -XX:OnOutOfMemoryError=taskkill /F /PID %p, -Dcassandra.jmx.local.port=7199]
WARN [main] 2020-07-28 16:36:18,405 StartupChecks.java:169 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO [main] 2020-07-28 16:36:18,410 SigarLibrary.java:44 - Initializing SIGAR library
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000010014ed4, pid=19812, tid=0x000000000000481c
#
# JRE version: Java(TM) SE Runtime Environment (8.0_261-b12) (build 1.8.0_261-b12)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.261-b12 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [sigar-amd64-winnt.dll+0x14ed4]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Program Files\apache-cassandra-3.11.7\bin\hs_err_pid19812.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Any ideas?
EDIT #2: per request, here is the crash dump log (posting a few interesting snippets due to the post length limit):
Register to memory mapping:
RAX=0x0000000070641c60 is an unknown value
RBX={method} {0x0000014170463c88} 'getFileSystemListNative' '()[Lorg/hyperic/sigar/FileSystem;' in 'org/hyperic/sigar/Sigar'
RCX=0x000001414f753688 is an unknown value
RDX=0x00000028ab53e5b8 is pointing into the stack for thread: 0x000001414f753490
RSP=0x00000028ab53e3e0 is pointing into the stack for thread: 0x000001414f753490
RBP=0x00000028ab53e598 is pointing into the stack for thread: 0x000001414f753490
RSI={method} {0x000001416ccc0488} '<init>' '()V' in 'java/lang/Object'
RDI=0x0000000000118e98 is an unknown value
R8 =0x0000000000000032 is an unknown value
R9 =
[error occurred during error reporting (printing register info), id 0xc0000005]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j org.hyperic.sigar.Sigar.getFileSystemListNative()[Lorg/hyperic/sigar/FileSystem;+0
j org.hyperic.sigar.Sigar.getFileSystemList()[Lorg/hyperic/sigar/FileSystem;+1
j org.hyperic.sigar.Sigar.getFileSystemMap()Lorg/hyperic/sigar/FileSystemMap;+19
j org.apache.cassandra.utils.SigarLibrary.<init>()V+79
j org.apache.cassandra.utils.SigarLibrary.<clinit>()V+4
v ~StubRoutines::call_stub
j org.apache.cassandra.service.StartupChecks$7.execute()V+0
j org.apache.cassandra.service.StartupChecks.verify()V+30
j org.apache.cassandra.service.CassandraDaemon.setup()V+41
j org.apache.cassandra.service.CassandraDaemon.activate()V+46
j org.apache.cassandra.service.CassandraDaemon.main([Ljava/lang/String;)V+3
v ~StubRoutines::call_stub
Deoptimization events (10 events):
Event: 2.609 Thread 0x000001414f753490 Uncommon trap: reason=unstable_if action=reinterpret pc=0x000001415132849c method=java.lang.CharacterDataLatin1.digit(II)I @ 82
Event: 2.610 Thread 0x000001414f753490 Uncommon trap: reason=unstable_if action=reinterpret pc=0x000001415132a514 method=java.lang.CharacterDataLatin1.digit(II)I @ 82
Event: 2.616 Thread 0x000001414f753490 Uncommon trap: reason=unstable_if action=reinterpret pc=0x0000014151438564 method=java.util.Arrays.equals([B[B)Z @ 2
Event: 2.621 Thread 0x000001414f753490 Uncommon trap: reason=class_check action=maybe_recompile pc=0x000001415135d188 method=java.util.regex.Pattern$Curly.match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z @ 19
Event: 2.621 Thread 0x000001414f753490 Uncommon trap: reason=class_check action=maybe_recompile pc=0x000001415135d188 method=java.util.regex.Pattern$Curly.match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z @ 19
Event: 2.624 Thread 0x000001414f753490 Uncommon trap: reason=class_check action=maybe_recompile pc=0x000001415135d188 method=java.util.regex.Pattern$Curly.match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z @ 19
Event: 2.624 Thread 0x000001414f753490 Uncommon trap: reason=class_check action=maybe_recompile pc=0x000001415135d188 method=java.util.regex.Pattern$Curly.match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Z @ 19
Event: 2.631 Thread 0x000001414f753490 Uncommon trap: reason=predicate action=maybe_recompile pc=0x000001415143734c method=java.lang.String.regionMatches(ZILjava/lang/String;II)Z @ 63
Event: 2.667 Thread 0x000001414f753490 Uncommon trap: reason=class_check action=maybe_recompile pc=0x000001415142b318 method=java.util.regex.Matcher.search(I)Z @ 86
Event: 2.747 Thread 0x000001416d7d6230 Uncommon trap: reason=unstable_if action=reinterpret pc=0x00000141514cdaa0 method=java.util.concurrent.ConcurrentHashMap.putVal(Ljava/lang/Object;Ljava/lang/Object;Z)Ljava/lang/Object; @ 113
Classes redefined (0 events):
No events
Internal exceptions (10 events):
Event: 2.455 Thread 0x000001414f753490 Exception <a 'java/lang/ClassNotFoundException': org/apache/cassandra/config/EncryptionOptions$ClientEncryptionOptionsCustomizer> (0x00000005c6938828) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\classfi
Event: 2.455 Thread 0x000001414f753490 Exception <a 'java/lang/ClassNotFoundException': org/apache/cassandra/config/TransparentDataEncryptionOptionsBeanInfo> (0x00000005c694d288) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\classfile\systemDi
Event: 2.456 Thread 0x000001414f753490 Exception <a 'java/lang/ClassNotFoundException': org/apache/cassandra/config/TransparentDataEncryptionOptionsCustomizer> (0x00000005c695e860) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\classfile\system
Event: 2.464 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c69ef788) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.464 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c69f0358) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.466 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c6a02b98) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.466 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c6a030c8) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.468 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c6a03ea0) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.468 Thread 0x000001414f753490 Exception <a 'sun/nio/fs/WindowsException'> (0x00000005c6a043d0) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\prims\jni.cpp, line 710]
Event: 2.717 Thread 0x000001414f753490 Exception <a 'java/lang/ClassNotFoundException': javax/management/remote/rmi/RMIServerImpl_Skel> (0x00000005c83b4838) thrown at [C:\jenkins\workspace\8-2-build-windows-amd64-cygwin\jdk8u261\295\hotspot\src\share\vm\classfile\systemDictionary.cpp, line 210
Events (10 events):
Event: 2.788 loading class org/apache/cassandra/net/MessagingService done
Event: 2.789 loading class org/hyperic/sigar/FileSystemMap
Event: 2.789 loading class org/hyperic/sigar/FileSystemMap done
Event: 2.789 loading class org/apache/cassandra/net/MessagingServiceMBean
Event: 2.789 loading class org/apache/cassandra/net/MessagingServiceMBean done
Event: 2.790 loading class org/hyperic/sigar/FileSystem
Event: 2.790 loading class org/hyperic/sigar/FileSystem done
Event: 2.790 loading class org/apache/cassandra/net/MessagingService$2
Event: 2.790 loading class org/apache/cassandra/net/MessagingService$2 done
Event: 2.791 loading class org/apache/cassandra/net/MessagingService$1
| I think it is the SIGAR library that Cassandra uses that is causing the problem (especially on recent JDK 8 builds).
SIGAR is not required to run Cassandra, so you can comment out the line that loads it in cassandra-env.ps1 in the conf directory:
https://github.com/apache/cassandra/blob/cassandra-3.11.7/conf/cassandra-env.ps1#L357
| Cassandra | 63,144,295 | 14 |
I'm trying to access my Cassandra server through a cqlsh client to import a huge CSV file. I'm getting a 'module' object has no attribute 'parse_options' error.
I run the following command:
cqlsh XXX.XXX.XX.XX XXXX --cqlversion="3.4.2" --execute="copy evolvdso.teste from '2016-10-26 15:25:10.csv' WITH DELIMITER =',' AND HEADER=TRUE --debug";
This is the debug and error message that follows:
Starting copy of evolvdso.teste with columns ['ref_equip', 'date', 'load', 'ptd_assoc'].
Traceback (most recent call last):
File "/usr/local/bin/cqlsh", line 1133, in onecmd
self.handle_statement(st, statementtext)
File "/usr/local/bin/cqlsh", line 1170, in handle_statement
return custom_handler(parsed)
File "/usr/local/bin/cqlsh", line 1834, in do_copy
rows = self.perform_csv_import(ks, cf, columns, fname, opts)
File "/usr/local/bin/cqlsh", line 1846, in perform_csv_import
csv_options, dialect_options, unrecognized_options = copyutil.parse_options(self, opts)
AttributeError: 'module' object has no attribute 'parse_options'
| I had the same issue when using the cqlsh installed via pip install cqlsh.
Try using the cqlsh tool that ships with Cassandra instead, for example:
sudo docker run -it cassandra /usr/bin/cqlsh
Refer to the JIRA issue for details.
| Cassandra | 40,289,324 | 14 |
In two different places in the Cassandra documentation, I found the following:
link 1
A structure stored in memory that checks if row data exists in the memtable before accessing SSTables on disk
and
link2
Cassandra checks the Bloom filter to discover which SSTables are likely to have the request partition data.
My question is: are both of the above statements right? If yes, are bloom filters maintained separately for a memtable and an SSTable? Thanks in advance.
| A Bloom filter is a generic data structure used to check whether an element is present in a set or not. Its algorithm is designed to be extremely fast, at the cost of occasionally returning false positives.
Cassandra uses bloom filters to test if any of the SSTables is likely to contain the requested partition key or not, without actually having to read their contents (and thus avoiding expensive IO operations).
If a bloom filter returns false for a given partition key, then it is absolutely certain that the partition key is not present in the corresponding SSTable; if it returns true, however, then the SSTable is likely to contain the partition key. When this happens, Cassandra will resort to more sophisticated techniques to determine if it needs to read that SSTable or not. Note that bloom filters are consulted for most reads, and updated only during some writes (when a memtable is flushed to disk). You can read more about Cassandra's read path here.
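For completeness, the false-positive probability is tunable per table through the bloom_filter_fp_chance property. A minimal sketch (the keyspace and table names here are hypothetical):

-- a lower value spends more memory on the filter in exchange for fewer false positives
ALTER TABLE my_keyspace.my_table WITH bloom_filter_fp_chance = 0.01;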
Back to your questions:
1) The first statement ("A structure stored in memory that checks if row data exists in the memtable before accessing SSTables on disk") is IMHO not accurate: bloom filters are indeed updated when a memtable is flushed to disk, but they do not reference the memtable.
2) Bloom filters are maintained per SSTable, i.e. each SSTable on disk gets a corresponding bloom filter in memory.
| Cassandra | 39,327,427 | 14 |
I have read about C* replication.
Does setting the partitioner in the Cassandra setup to the Murmur partitioner make the cluster a "C*" cluster?
| "C*" is an abbreviation for "Cassandra". They are the same thing.
| Cassandra | 36,028,926 | 14 |
I am trying to set up Cassandra on my RHEL 6.5 server. When I start Cassandra, I get an ERROR related to JNA. The exception says the class was not found; however, I see in the logs that the JNA jar is added to the classpath. I tried both apache-cassandra-3.0.0 and apache-cassandra-2.2.3 and get the same exception in both. The JNA jar is available in $CASSANDRA_HOME/lib and also in /usr/share/java; the installed JNA version is 4.0.0. Any help is appreciated. The startup log follows -
INFO 05:57:57 Classpath: /home/cassandra-new/apache-cassandra-2.2.3/conf:/home/cassandra-new/apache-cassandra-2.2.3/build/classes/main:/home/cassandra-new/apache-cassandra-2.2.3/build/classes/thrift:/home/cassandra-new/apache-cassandra-2.2.3/lib/airline-0.6.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/antlr-runtime-3.5.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-clientutil-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-thrift-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-cli-1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-codec-1.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-lang3-3.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-math3-3.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/compress-lzf-0.8.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/concurrentlinkedhashmap-lru-1.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/crc32ex-0.1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/disruptor-3.0.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ecj-4.4.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/guava-16.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/high-scale-lib-1.0.6.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jackson-core-asl-1.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jackson-mapper-asl-1.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jamm-0.3.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/javax.inject.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jbcrypt-0.3m.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jcl-over-slf4j-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jna-4.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/joda-time-2.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/json-simple-1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/libthrift-0.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/log4j-over-slf4j-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/logback-classic-1.1.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/logback-core-1.1.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/lz4-1.3.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/metrics-core-3.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/metrics-logback-3.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/netty-all-4.0.23.Final.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ohc-core-0.3.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ohc-core-j8-0.3.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/reporter-config3-3.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/reporter-config-base-3.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/sigar-1.6.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/slf4j-api-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/snakeyaml-1.11.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/snappy-java-1.1.1.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ST4-4.0.8.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/stream-2.5.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/super-csv-2.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/thrift-server-0.3.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jsr223//.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jamm-0.3.0.jar
WARN 05:57:57 JNA link failure, one or more native method will be unavailable.
WARN 05:57:57 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO 05:57:57 Initializing SIGAR library
WARN 05:57:57 Cassandra server running in degraded mode. Is swap disabled? : false, Address space adequate? : true, nofile limit adequate? : false, nproc limit adequate? : true
INFO 05:57:58 Initializing system.sstable_activity
INFO 05:57:58 Initializing system.hints
INFO 05:57:58 Initializing system.compaction_history
INFO 05:57:58 Initializing system.peers
INFO 05:57:58 Initializing system.schema_columnfamilies
INFO 05:57:59 Initializing system.schema_functions
INFO 05:57:59 Initializing system.IndexInfo
INFO 05:57:59 Initializing system.schema_columns
INFO 05:57:59 Initializing system.schema_triggers
INFO 05:57:59 Initializing system.local
INFO 05:57:59 Initializing system.schema_usertypes
INFO 05:57:59 Initializing system.batchlog
INFO 05:57:59 Initializing system.available_ranges
INFO 05:57:59 Initializing system.schema_aggregates
INFO 05:57:59 Initializing system.paxos
INFO 05:57:59 Initializing system.peer_events
INFO 05:57:59 Initializing system.size_estimates
INFO 05:57:59 Initializing system.compactions_in_progress
INFO 05:57:59 Initializing system.schema_keyspaces
INFO 05:57:59 Initializing system.range_xfers
ERROR 05:57:59 Exception in thread Thread[MemtableFlushWriter:1,5,main]
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.Memory.(Memory.java:74) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:274) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:73) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:75) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:424) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:367) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:352) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.3.jar:2.2.3]
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1134) ~[apache-cassandra-2.2.3.jar:2.2.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
| I went through the code in CLibrary.java and found the following code, where the exception is caught -
catch (UnsatisfiedLinkError e)
{
logger.warn("JNA link failure, one or more native method will be unavailable.");
logger.trace("JNA link failure details: {}", e.getMessage());
}
I restarted Cassandra after changing the log level in conf/logback.xml to TRACE, to print that extra detail -
<logger name="org.apache.cassandra" level="TRACE"/>
I could now see the real issue -
/tmp/jna-3506402/jna6068045839690239595.tmp: failed to map segment from shared object: Operation not permitted
This issue is caused by the noexec mount flag on the /tmp folder.
I then decided to change the tmp folder by setting the tmpdir JVM option:
-Djava.io.tmpdir=/home/cassandra/tmp
That fixed the issue.
I added the option in the cassandra-env.sh file with the following statement -
JVM_OPTS="$JVM_OPTS -Djava.io.tmpdir=/home/cassandra/tmp"
| Cassandra | 34,059,020 | 14 |
I'm having trouble trying to model my data such that I can efficiently query Cassandra for the last 10 (any number actually) records that were most recently modified. Each record has a last_modified_date column that is set by the application when inserting/updating the record.
I've excluded the data columns from this example code.
Main data table (contains only one row per record):
CREATE TABLE record (
record_id int,
last_modified_by text,
last_modified_date timestamp,
PRIMARY KEY (record_id)
);
Solution 1 (Fail)
I tried to create a separate table, which used a clustering key order.
Table (one row for each record; only inserting the last modified date):
CREATE TABLE record_by_last_modified_index (
record_id int,
last_modified_by text,
last_modified_date timestamp,
PRIMARY KEY (record_id, last_modified_date)
) WITH CLUSTERING ORDER BY (last_modified_date DESC);
Query:
SELECT * FROM record_by_last_modified_index LIMIT 10
This solution does not work since the clustering order only applies to the ordering of records with the same partition key. Since each row has a different partition key (record_id) the query results don't include the expected records.
Solution 2 (Inefficient)
Another solution I have tried is to simply query Cassandra for all record_id and last_modified_date values, sort them and pick the first 10 records in my application. This is clearly inefficient and won't scale well.
Solution 3
One last solution I considered is using the same partition key for all records and using clustering order to ensure records are sorted correctly. The problem with that solution is that the data will not be properly distributed across the nodes since all of the records would have the same partition key. That seems like a non-starter to me.
| I think what you're trying to do is more of a relational database model and is somewhat of an anti-pattern in Cassandra.
Cassandra only sorts things based on clustering columns, but the sort order isn't expected to change. This is because when memtables are written to disk as SSTables (Sorted String Tables), the SSTables are immutable and can't be re-sorted efficiently. This is why you aren't allowed to update the value of a clustering column.
If you want to re-sort the clustered rows, the only way I know is to delete the old row and insert a new one in a batch. To make that even more inefficient, you would probably need to first do a read to figure out what the last_modified_date was for the record_id so that you could delete it.
So I'd look for a different approach, such as just writing the updates as new clustered rows and leave the old ones there (possibly clean them up over time using a TTL). So your newest updates would always be on top when you did a LIMIT query.
In terms of partitioning, you will need to break your data into a few categories to spread the data over your nodes. That means you won't get global sorting of your table, but only within categories, which is due to the distributed model. If you really need global sorting, then perhaps look at something like pairing Cassandra with Spark. Sorting is super expensive in time and resources, so think carefully if you really need it.
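To make that concrete, here is a minimal sketch of the append-only approach; the record_type grouping column is an assumption used to spread partitions across nodes, and old entries are simply left to expire via a TTL. Note that a record modified several times will appear more than once, so the application may need to de-duplicate the results.

CREATE TABLE record_by_modification (
    record_type int,
    last_modified_date timestamp,
    record_id int,
    PRIMARY KEY (record_type, last_modified_date, record_id)
) WITH CLUSTERING ORDER BY (last_modified_date DESC);

-- every modification appends a new clustered row; expire it after 7 days
INSERT INTO record_by_modification (record_type, last_modified_date, record_id)
VALUES (1, dateof(now()), 100) USING TTL 604800;

-- the N most recently modified records for a given type
SELECT record_id, last_modified_date FROM record_by_modification
 WHERE record_type = 1 LIMIT 10;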
Update:
Thinking about this some more, you should be able to do this in Cassandra 3.0 using materialized views. The view would take care of the messy delete and insert for you, to re-order the clustered rows. So here's what it looks like in the 3.0 alpha release:
First create the base table:
CREATE TABLE record_ids (
record_type int,
last_modified_date timestamp,
record_id int,
PRIMARY KEY(record_type, record_id));
Then create a view of that table, using last_modified_date as a clustering column:
CREATE MATERIALIZED VIEW last_modified AS
SELECT record_type FROM record_ids
WHERE record_type IS NOT NULL AND last_modified_date IS NOT NULL AND record_id IS NOT NULL
PRIMARY KEY (record_type, last_modified_date, record_id)
WITH CLUSTERING ORDER BY (last_modified_date DESC);
Now insert some records:
insert into record_ids (record_type, last_modified_date, record_id) VALUES ( 1, dateof(now()), 100);
insert into record_ids (record_type, last_modified_date, record_id) VALUES ( 1, dateof(now()), 200);
insert into record_ids (record_type, last_modified_date, record_id) VALUES ( 1, dateof(now()), 300);
SELECT * FROM record_ids;
record_type | record_id | last_modified_date
-------------+-----------+--------------------------
1 | 100 | 2015-08-14 19:41:10+0000
1 | 200 | 2015-08-14 19:41:25+0000
1 | 300 | 2015-08-14 19:41:41+0000
SELECT * FROM last_modified;
record_type | last_modified_date | record_id
-------------+--------------------------+-----------
1 | 2015-08-14 19:41:41+0000 | 300
1 | 2015-08-14 19:41:25+0000 | 200
1 | 2015-08-14 19:41:10+0000 | 100
Now we update a record in the base table, and should see it move to the top of the list in the view:
UPDATE record_ids SET last_modified_date = dateof(now())
WHERE record_type=1 AND record_id=200;
So in the base table, we see the timestamp for record_id=200 was updated:
SELECT * FROM record_ids;
record_type | record_id | last_modified_date
-------------+-----------+--------------------------
1 | 100 | 2015-08-14 19:41:10+0000
1 | 200 | 2015-08-14 19:43:13+0000
1 | 300 | 2015-08-14 19:41:41+0000
And in the view, we see:
SELECT * FROM last_modified;
record_type | last_modified_date | record_id
-------------+--------------------------+-----------
1 | 2015-08-14 19:43:13+0000 | 200
1 | 2015-08-14 19:41:41+0000 | 300
1 | 2015-08-14 19:41:10+0000 | 100
So you see that record_id=200 moved up in the view and if you do a limit N on that table, you'd get the N most recently modified rows.
| Cassandra | 32,014,367 | 14 |
Writing data to Cassandra without causing it to create tombstones is vital in our case, due to the amount of data and the required speed. So far we have only written a row once and never needed to update it again, only to fetch the data.
Now there is a case where we actually need to write data and then complete it with more data that becomes available after a while.
This can be done by either:
overwriting all of the data in the row again using INSERT (all data is available at that point), or
performing an UPDATE with only the new data.
What is the best way to do this, bearing in mind that speed and not creating tombstones are important?
| Tombstones will only be created when deleting data or using TTL values.
Cassandra aligns very well with your described use case. Incrementally adding data will work for both INSERT and UPDATE statements. Cassandra will store data in different locations when data is added over time for the same partition key. Periodically running compactions will merge the data for a single key again, to optimize access and free disk space. This happens based on the timestamps of the written values but does not create any new tombstones.
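A minimal sketch of the incremental pattern (the table and column names are hypothetical): the first write stores only the columns provided, and the later UPDATE adds the remaining columns. Neither statement produces tombstones as long as no column is explicitly set to null and no TTL is used.

CREATE TABLE readings (
    id int PRIMARY KEY,
    received_at timestamp,
    raw_value int,
    processed_value int,
    status text
);

-- initial, partial write
INSERT INTO readings (id, received_at, raw_value)
VALUES (1, '2015-06-25 10:00:00', 42);

-- later, complete the row; unchanged columns are simply left out
UPDATE readings SET processed_value = 84, status = 'complete' WHERE id = 1;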
You can learn more about how Cassandra stores data e.g. here.
| Cassandra | 31,053,406 | 14 |
I have a three-node Cassandra cluster and I have created one table which has more than 2,000,000 rows.
When I execute this (select count(*) from userdetails) query in cqlsh, I get this error:
OperationTimedOut: errors={}, last_host=192.168.1.2
When I run the count for fewer rows, or with a limit of 50,000, it works fine.
| count(*) actually pages through all the data, so a select count(*) from userdetails without a limit would be expected to time out with that many rows. Some details here:
http://planetcassandra.org/blog/counting-key-in-cassandra/
You may want to consider maintaining the count yourself, using Spark, or, if you just want a ballpark number, you can grab it from JMX.
Grabbing it from JMX can be a little tricky depending on your data model. To get the number of partitions, grab the org.apache.cassandra.metrics:type=ColumnFamily,keyspace={{Keyspace}},scope={{Table}},name=EstimatedColumnCountHistogram mbean and sum up all the 90 values (this is what nodetool cfstats outputs). It will only give you the number that exists in SSTables, so to make it more accurate you can do a flush, or try to estimate the number in memtables from the MemtableColumnsCount mbean.
For a very basic ballpark number you can grab the estimated partition counts from system.size_estimates across all the ranges listed (note that this is only the number on one node). Multiply that out by the number of nodes, then divide by RF.
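Two hedged sketches of those options (the keyspace, table, and column names are illustrative): keeping the count yourself in a counter table, and reading the per-node partition estimate from system.size_estimates (available in recent Cassandra versions).

-- maintain the count yourself, incrementing on every insert
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    row_count counter
);
UPDATE row_counts SET row_count = row_count + 1 WHERE table_name = 'userdetails';

-- rough per-node partition estimate (multiply by nodes, divide by RF)
SELECT range_start, range_end, partitions_count
  FROM system.size_estimates
 WHERE keyspace_name = 'mykeyspace' AND table_name = 'userdetails';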
| Cassandra | 29,394,382 | 14 |
I am new to Cassandra and I want to install it. So far I've read a small article on it.
But there is one thing that I do not understand: the meaning of 'node'.
Can anyone tell me what a 'node' is, what it is for, and how many nodes we can have in one cluster?
| A node is the storage layer within a server.
Newer versions of Cassandra use virtual nodes, or vnodes. There are 256 vnodes per server by default.
A vnode is essentially the storage layer.
machine: a physical server, EC2 instance, etc.
server: an installation of Cassandra. Each machine has one installation of Cassandra. The Cassandra server runs core processes such as the snitch, the partitioner, etc.
vnode: The storage layer in a Cassandra server. There are 256 vnodes per server by default.
Helpful tip:
Where you will get confused is that Cassandra terminology (in older blog posts, YouTube videos, and so on) had been used inconsistently. In older versions of Cassandra, each machine had one Cassandra server installed, and each server contained one node. Due to the 1-to-1-to-1 relationship between machine-server-node in old versions of Cassandra people previously used the terms machine, server and node interchangeably.
| Cassandra | 28,456,921 | 14 |
My table looks like this
create table Notes(
user_id varchar,
real_time timestamp,
insertion_time timeuuid,
read boolean, PRIMARY KEY (user_id, real_time, insertion_time)
);
create index read_index on Notes (read);
I want to update all the rows with user_id = 'xxx' without having to specify all the clustering columns.
UPDATE Notes SET read = true where user_id = 'xxx'; // Says Error
Error: message="Missing mandatory PRIMARY KEY part real_time
I have tried creating a secondary index, but it's not allowed on the primary key.
How can i solve this?
I chose user_id to be in the primary key because I want select * from Notes where user_id = 'xxx' to be possible.
| While this might be possible with a RDBMS and SQL, it is not possible with cql in Cassandra. From the DataStax documentation on the UPDATE command:
Each update statement requires a precise set of primary keys to be specified using a WHERE clause. You need to specify all keys in a table having compound and clustering columns.
You'll probably need to write something quick in Python (or one of the other drivers) to perform this type of update.
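For illustration, these are the two statements such a script would run against the table from the question (the ? placeholders are bind markers that the driver fills in per row):

-- first, read the clustering key values for the partition
SELECT real_time, insertion_time FROM Notes WHERE user_id = 'xxx';

-- then, for each (real_time, insertion_time) pair returned, issue an
-- update with the full primary key
UPDATE Notes SET read = true
 WHERE user_id = 'xxx' AND real_time = ? AND insertion_time = ?;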
| Cassandra | 27,295,679 | 14 |
In Cassandra, I can create a composite partition key, separate from my clustering key:
CREATE TABLE footable (
column1 text,
column2 text,
column3 text,
column4 text,
PRIMARY KEY ((column1, column2))
)
As I understand it, querying by partition key is an extremely efficient (the most efficient?) method for retrieving data. What I don't know, however, is whether it's also efficient to query by only part of a composite partition key.
In MSSQL, this would be efficient, as long as components are included starting with the first (column1 instead of column2, in this example). Is this also the case in Cassandra? Is it highly efficient to query for rows based only on column1, here?
| This is not the case in Cassandra, because it is not possible. Doing so will yield the following error:
Partition key part entity must be restricted since preceding part is
Check out this Cassandra 2014 SF Summit presentation from DataStax MVP Robbie Strickland titled "CQL Under the Hood." Slides 62-64 show that the complete partition key is used as the rowkey. With composite partitioning keys in Cassandra, you must query by all of the rowkey or none of it.
You can watch the complete presentation video here.
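To make the rule concrete with the footable definition from the question (a sketch):

-- valid: the full partition key is restricted
SELECT * FROM footable WHERE column1 = 'a' AND column2 = 'b';

-- valid: no partition key restriction at all (a full scan)
SELECT * FROM footable;

-- invalid: only part of the composite partition key is restricted,
-- which fails with the error quoted above
-- SELECT * FROM footable WHERE column1 = 'a';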
| Cassandra | 27,277,025 | 14 |
I have a tricky Spark problem that I just can't wrap my head around.
We have two RDDs (coming from Cassandra). RDD1 contains Actions and RDD2 contains Historic data. Both have an id on which they can be matched/joined. But the problem is that the two tables have an N:N relationship. Actions contains multiple rows with the same id and so does Historic. Here is some example data from both tables.
Actions time is actually a timestamp
id | time | valueX
1 | 12:05 | 500
1 | 12:30 | 500
2 | 12:30 | 125
Historic set_at is actually a timestamp
id | set_at| valueY
1 | 11:00 | 400
1 | 12:15 | 450
2 | 12:20 | 50
2 | 12:25 | 75
How can we join these two tables in a way, that we get a result like this
1 | 100 # 500 - 400 for Actions#1 with time 12:05 because Historic was in that time at 400
1 | 50 # 500 - 450 for Actions#2 with time 12:30 because H. was in that time at 450
2 | 50 # 125 - 75 for Actions#3 with time 12:30 because H. was in that time at 75
I can't come up with a good solution that feels right without making a lot of iterations over huge datasets. I keep thinking about building ranges from the Historic set and then somehow checking whether an Action fits into a range, e.g. (11:00 - 12:15), to make the calculation. But that seems pretty slow to me. Is there a more efficient way to do that? It seems to me that this kind of problem could be common, but I couldn't find any hints on it yet. How would you solve this problem in Spark?
My current attempt so far (half-finished code):
case class Historic(id: String, set_at: Long, valueY: Int)
val historicRDD = sc.cassandraTable[Historic](...)
historicRDD
.map( row => ( row.id, row ) )
.reduceByKey(...)
// transforming to another case which results in something like this; code not finished yet
// (List((Range(0, 12:25), 400), (Range(12:25, NOW), 450)))
// From here we could join with Actions
// And then some .filter maybe to select the right Lists tuple
| It's an interesting problem. I also spent some time figuring out an approach. This is what I came up with:
Given case classes for Action(id, time, x) and Historic(id, time, y)
Join the actions with the history (this might be heavy)
filter all historic data not relevant for a given action
key the results by (id,time) - differentiate same key at different times
reduce the history by action to the max value, leaving us with relevant historical record for the given action
In Spark:
val actionById = actions.keyBy(_.id)
val historyById = historic.keyBy(_.id)
val actionByHistory = actionById.join(historyById)
val filteredActionByidTime = actionByHistory.collect{ case (k, (action, historic)) if (action.time > historic.time) => ((action.id, action.time), (action, historic))}
val topHistoricByAction = filteredActionByidTime.reduceByKey{ case ((a1: Action, h1: Historic), (a2: Action, h2: Historic)) => (a1, if (h1.time > h2.time) h1 else h2)}
// we are done, let's produce a report now
val report = topHistoricByAction.map{ case ((id, time), (action, historic)) => (id, time, action.x - historic.y) }
Using the data provided above, the report looks like:
report.collect
Array[(Int, Long, Int)] = Array((1,43500,100), (1,45000,50), (2,45000,50))
(I transformed the time to seconds to have a simplistic timestamp)
| Cassandra | 27,138,392 | 14 |
I am trying to evaluate the number of tombstones getting created in one of the tables in our application. For that I am trying to use nodetool cfstats. Here is how I am doing it:
create table demo.test(a int, b int, c int, primary key (a));
insert into demo.test(a, b, c) values(1,2,3);
Now I am making the same insert as above. So I expect 3 tombstones to be created. But on running cfstats for this columnfamily, I still see that there are no tombstones created.
nodetool cfstats demo.test
Average live cells per slice (last five minutes): 0.0
Average tombstones per slice (last five minutes): 0.0
Now I tried deleting the record, but I still don't see any tombstones getting created. Is there anything that I am missing here? Please suggest.
BTW a few other details,
* We are using version 2.1.1 of the Java driver
* We are running against Cassandra 2.1.0
| For tombstone counts on a query, your best bet is to enable tracing. This will give you the in-depth history of a query, including how many tombstones had to be read to complete it. This won't give you the total tombstone count, but it is most likely more relevant for performance tuning.
In cqlsh you can enable this with
cqlsh> tracing on;
Now tracing requests.
cqlsh> SELECT * FROM ascii_ks.ascii_cs where pkey = 'One';
pkey | ckey1 | data1
------+-------+-------
One | One | One
(1 rows)
Tracing session: 2569d580-719b-11e4-9dd6-557d7f833b69
activity | timestamp | source | source_elapsed
--------------------------------------------------------------------------+--------------+-----------+----------------
execute_cql3_query | 08:26:28,953 | 127.0.0.1 | 0
Parsing SELECT * FROM ascii_ks.ascii_cs where pkey = 'One' LIMIT 10000; | 08:26:28,956 | 127.0.0.1 | 2635
Preparing statement | 08:26:28,960 | 127.0.0.1 | 6951
Executing single-partition query on ascii_cs | 08:26:28,962 | 127.0.0.1 | 9097
Acquiring sstable references | 08:26:28,963 | 127.0.0.1 | 10576
Merging memtable contents | 08:26:28,963 | 127.0.0.1 | 10618
Merging data from sstable 1 | 08:26:28,965 | 127.0.0.1 | 12146
Key cache hit for sstable 1 | 08:26:28,965 | 127.0.0.1 | 12257
Collating all results | 08:26:28,965 | 127.0.0.1 | 12402
Request complete | 08:26:28,965 | 127.0.0.1 | 12638
http://www.datastax.com/dev/blog/tracing-in-cassandra-1-2
| Cassandra | 27,063,508 | 14 |
Given an example of the following select in CQL:
SELECT * FROM tickets WHERE ID IN (1,2,3,4)
Given that ID is a partition key, is using an IN relation better than doing multiple queries, or is there no difference?
| I remembered seeing someone answer this question in the Cassandra user mailing list a short while back, but I cannot find the exact message right now. Ironically, Cassandra Evangelist Rebecca Mills just posted an article that addresses this issue (Things you should be doing when using Cassandra drivers...points #13 and #22). But the answer is "yes" that in some cases, multiple, parallel queries would be faster than using an IN. The underlying reason can be found in the DataStax SELECT documentation.
When not to use IN
...Using IN can degrade performance because
usually many nodes must be queried. For example, in a single, local
data center cluster with 30 nodes, a replication factor of 3, and a
consistency level of LOCAL_QUORUM, a single key query goes out to two
nodes, but if the query uses the IN condition, the number of nodes
being queried are most likely even higher, up to 20 nodes depending on
where the keys fall in the token range.
So based on that, it would seem that this becomes more of a problem as your cluster gets larger.
Therefore, the best way to solve this problem (and not have to use IN at all) would be to rethink your data model for this query. Without knowing too much about your schema, perhaps there are attributes (column values) that are shared by ticket IDs 1, 2, 3, and 4. Maybe using something like level or group (if tickets are for a particular venue) or maybe even an event (id), instead.
Basically, while using a unique, high-cardinality identifier to partition your data sounds like a good idea, it actually makes it harder to query your data (in Cassandra) later on. If you could come up with a different column to partition your data on, that would certainly help you in this case. Regardless, creating a new, specific column family (table) to handle queries for those rows is going to be a better approach than using IN or multiple queries.
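As a hedged sketch of that idea (the group_id attribute is hypothetical and stands in for whatever the tickets share, such as a venue or event):

CREATE TABLE tickets_by_group (
    group_id  int,
    ticket_id int,
    details   text,
    PRIMARY KEY (group_id, ticket_id)
);

-- one single-partition read instead of IN over four separate partitions
SELECT * FROM tickets_by_group WHERE group_id = 42;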
| Cassandra | 26,999,098 | 14 |
Executing two requests that are identical except for the DISTINCT keyword gives unexpected results. Without the keyword the result is OK, but with DISTINCT it looks like the WHERE clause is ignored. Why?
Cqlsh version:
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Table considered:
DESCRIBE TABLE events;
CREATE TABLE events (
userid uuid,
"timestamp" timestamp,
event_type text,
data text,
PRIMARY KEY (userid, "timestamp", event_type)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
Table content:
SELECT * FROM events;
userid | timestamp | event_type | data
--------------------------------------+--------------------------+------------+------
aaaaaaaa-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:06:17+0100 | toto | null
4271a78f-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:06:17+0100 | toto | null
4271a78f-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:07:17+0100 | toto | null
4271a78f-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:08:17+0100 | toto | null
4271a78f-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:09:17+0100 | toto | null
4271a78f-be1c-44ab-a0e8-f25cf6064b0e | 1970-01-17 09:10:17+0100 | toto | null
(6 rows)
Request1: Request without DISTINCT
SELECT userid FROM events WHERE timestamp > '1970-01-17 09:07:17+0100' ALLOW FILTERING;
userid
--------------------------------------
4271a78f-be1c-44ab-a0e8-f25cf6064b0e
4271a78f-be1c-44ab-a0e8-f25cf6064b0e
4271a78f-be1c-44ab-a0e8-f25cf6064b0e
(3 rows)
Request2: Same request with DISTINCT
SELECT DISTINCT userid FROM events WHERE timestamp > '1970-01-17 09:07:17+0100' ALLOW FILTERING;
userid
--------------------------------------
aaaaaaaa-be1c-44ab-a0e8-f25cf6064b0e
4271a78f-be1c-44ab-a0e8-f25cf6064b0e
(2 rows)
EDIT 1
Here is some context.
This "events" table is subject to a lot of writes: it receives around 1k insertions per second, and I have a batch script that checks those events every 5 minutes.
This batch script has 2 needs:
1- get all userids that have been active in the last 5 minutes (i.e every userid present in the events from the last 5 minutes)
2- get all events related to those userids (not only for the last 5 minutes)
I used to have two different tables to handle this: one table "activeusers" for the first request and the "events" table as described here for the second request. My problem with that is that it requires my server to write to two different tables when it receives an event. So I tried this using only the events table.
| It happens that way because in Cassandra CQL DISTINCT is designed to return only the partition (row) keys of your table (column family)...which must be unique. Therefore, the WHERE clause can only operate on partition keys when used with DISTINCT (which in your case, isn't terribly useful). If you take the DISTINCT out, WHERE can then be used to evaluate the clustering (column) keys within each partition key (albeit, with ALLOW FILTERING).
I feel compelled to mention that ALLOW FILTERING is not something you should be doing a whole lot of...and definitely not in production. If that query is one you need to run often (querying events for userids after a certain timestamp) then I would suggest partitioning your data by event_type instead:
PRIMARY KEY (event_type, "timestamp", userid)
Then you'll be able to run this query without ALLOW FILTERING.
SELECT userid FROM events WHERE event_type='toto' AND timestamp > '1970-01-17 09:07:17+0100'
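For reference, the full table definition for that model might look like this (a sketch that keeps the original columns; the new table name is hypothetical):

CREATE TABLE events_by_type (
    event_type text,
    "timestamp" timestamp,
    userid uuid,
    data text,
    PRIMARY KEY (event_type, "timestamp", userid)
);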
Without knowing anything about your application or use case, that may or may not be useful to you. But consider it as an example, and as an indication that there may be a better way build your model to satisfy your query pattern(s). Check out Patrick McFadin's article on timeseries data modeling for more ideas on how to model for this problem.
| Cassandra | 26,548,788 | 14 |
We are currently evaluating Cassandra as the data store for an analytical application. The plan was to dump raw data in Cassandra and then run mainly aggregation queries over it. Looking at CQL, it does not seem to support some traditional SQL operators like:
Typical aggregation functions like average, sum, count-Distinct etc.
Groupby-having operators
I did not find anything in the documentation that helps achieve the above. I also checked whether there are any hooks for providing such functions as extensions, say like map-reduce in MongoDB or user-defined functions in relational DBs.
People do talk about the paid DataStax Enterprise edition, and even that achieves this not via plain Cassandra but through separate components like Hadoop, Hive and Pig. Or there are suggestions about doing the needed pre-aggregations before dumping data into the DB, since Cassandra writes are fast.
It looks like too much overhead, at least for the basic things we need. Am I missing something fundamental here?
Would highly appreciate help on this.
| Aggregation is available in Cassandra as part of CASSANDRA-4914, which shipped in the 2.2.0-rc1 release.
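For example, queries like the following become possible in 2.2+ (a sketch; the table and column names are hypothetical, and the aggregates operate over the rows the query selects, ideally a single partition):

SELECT count(*), min(price), max(price), sum(price), avg(price)
  FROM sales
 WHERE shop_id = 1;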
| Cassandra | 23,532,128 | 14 |
I'm currently using the DataStax Cassandra driver for Cassandra 2 to execute CQL3. This works correctly. I started using PreparedStatements:
Session session = sessionProvider.getSession();
try {
PreparedStatement ps = session.prepare(cql);
ResultSet rs = session.execute(ps.bind(objects));
if (irsr != null) {
irsr.read(rs);
}
}
Sometimes I get a warning from the driver in my log:
Re-preparing already prepared query . Please note that preparing the same query more than once is generally an anti-pattern and will likely affect performance. Consider preparing the statement only once.
This warning makes sense, but I'm not sure how I should reuse the PreparedStatement.
Should I just create all my PreparedStatements in a constructor/init method and then simply use them?
But does this work well when multiple threads use the same PreparedStatement at the same time (especially calling PreparedStatement.bind() to bind objects)?
| You may just initialize the PreparedStatement once and cache it while the app is running. It should be available for use as long as the Cassandra cluster is up.
Using the statement from multiple threads is fine (as long as you don't modify it through setXXX() methods). When you call bind(), the code underneath only reads the PreparedStatement and then creates a new instance of BoundStatement() which the caller thread is then free to mutate.
Here is the source code, if you're curious (search for bind()).
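As an illustration of that advice, here is a minimal sketch of the prepare-once pattern; the class, table and column names are made up:

import java.util.UUID;
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class UserDao {
    private final Session session;
    private final PreparedStatement selectById; // prepared once, shared by all threads

    public UserDao(Session session) {
        this.session = session;
        this.selectById = session.prepare("SELECT * FROM users WHERE id = ?");
    }

    public Row findById(UUID id) {
        // bind() creates a fresh BoundStatement, so concurrent callers don't interfere
        BoundStatement bound = selectById.bind(id);
        return session.execute(bound).one();
    }
}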
| Cassandra | 22,915,840 | 14 |
I'm trying to understand the connection pooling in Datastax Cassandra Driver, so I can better use it in my web service.
I have version 1.0 of the documentation. It says:
The Java driver uses connections asynchronously, so multiple requests can be submitted on the same connection at the same time.
What do they understand by connection? When connecting to a cluster, we have: a Builder, a Cluster and a Session. Which one of them is the connection?
For example, there is this parameter:
maxSimultaneousRequestsPerConnection - number of simultaneous requests on all connections
to a host after which more connections are created.
So, these connections are automatically created, in the case of connection pooling (which is what I would expect). But what exactly are the connections? Cluster objects? Sessions?
I'm trying to decide what to keep 'static' in my web service. For the moment, I decided to keep the Builder static, so for every call I create a new Cluster and a new Session. Is this ok? If the Cluster is the Connection, then it should be ok. But is it? Now, the logger says, for every call:
2013:12:06 12:05:50 DEBUG Cluster:742 - Starting new cluster with contact points
2013:12:06 12:05:50 DEBUG ControlConnection:216 - [Control connection] Refreshing node list and token map
2013:12:06 12:05:50 DEBUG ControlConnection:219 - [Control connection] Refreshing schema
2013:12:06 12:05:50 DEBUG ControlConnection:147 - [Control connection] Successfully connected to...
So, it connects to the Cluster every time? It's not what I want, I want to reuse connections.
So, the connection is actually the Session? If this is the case, I should keep the Cluster static, not the Builder.
What method should I call, to be sure I reuse connections, whenever possible?
| The accepted answer (at the time of this writing) is giving the correct advice:
As long as you use the same Session object, you [will] be reusing connections.
However, some parts were originally oversimplified. I hope the following provides insight into the scope of each object type and their respective purposes.
Builder ≠ Cluster ≠ Session ≠ Connection ≠ Statement
A Cluster.Builder is used to configure and create a Cluster
A Cluster represents the entire Cassandra ring
A ring consists of multiple nodes (hosts), and the ring can support one or more keyspaces. You can query a Cluster object about cluster- (ring)-level properties.
I also think of it as the object that represents the calling application to the ring. You communicated your application's needs (e.g. encryption, compression, etc.) to the builder, but it is this object that first implements/communicates with the actual C* ring. If your application uses more than one authentication credential for different users/purposes, you likely have different Cluster objects even if they connect to the same ring.
A Session itself is not a connection, but it manages them
A session may need to talk to all nodes in the ring, which cannot be done with a single TCP connection except in the special case of rings that contain exactly one(1) node. The Session manages a connection pool, and that pool will generally have at least one connection for each node in the ring.
This is why you should re-use Session objects as much as possible. An application does not directly manage or access connections.
A Session is accessed from the Cluster object; it is usually "bound" to a single keyspace at a time, which becomes the default keyspace for the statements executed from that session. A statement can use a fully-qualified table name (e.g. keyspacename.tablename) to access tables in other keyspaces, so it's not required to use multiple sessions to access data across keyspaces. Using multiple sessions to talk to the same ring increases the total number of TCP connections required.
A Statement executes within a Session
Statements can be prepared or not, and each one either mutates data or queries it (and in some cases, both). The fastest, most efficient statements need to communicate with at most one node, and a Session from a topology-aware Cluster should contact only that node (or one of its peers) on a single TCP connection. The least efficient statements must touch all replicas (a majority of nodes), but that will be handled by the coordinator node on the ring itself, so even for these statements the Session will only use a single connection from the application.
Also, versions 2 and 3 of the Cassandra binary protocol used by the driver use multiplexing on the connections. So while a single statement requires at least one TCP connection, that single connection can potentially service up to 128 or 32k+ asynchronous requests simultaneously, depending on the protocol version (respectively).
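Putting that together, a minimal driver-2.x-style sketch of the recommended lifecycle looks like this (the contact point and keyspace name are placeholders); the Cluster and Session are created once and shared, and each execute() simply borrows a pooled connection:

// built once, at application startup
Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.0.1")                 // placeholder address
        .build();
Session session = cluster.connect("my_keyspace");    // placeholder keyspace

// reused for every request, from any thread
ResultSet rs = session.execute("SELECT * FROM users LIMIT 10");

// closed once, at application shutdown
session.close();
cluster.close();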
| Cassandra | 20,421,763 | 14 |
Let's say I use CQL to define this table.
CREATE TABLE songs (
id uuid PRIMARY KEY,
title text,
album text,
artist text,
tags set<text>,
data blob);
How can other developers (or myself after a few weeks) (re)discover the layout of this table?
I'm thinking of an equivalent to the MySQL DESCRIBE {tablename} command.
[EDIT]
I see there is a DESCRIBE method in Cassandra's command line interface (CLI), but upon using it, it states that it doesn't include information on CQL tables in its results.
| You should try the cqlsh tool which will show you exactly what you want:
lyubent@vm: ~$ ./cqlsh
cqlsh> use system;
cqlsh> describe columnfamily local;
CREATE TABLE local (
key text PRIMARY KEY,
bootstrapped text,
cluster_name text,
cql_version text,
data_center text,
gossip_generation int,
host_id uuid,
partitioner text,
rack text,
release_version text,
schema_version uuid,
thrift_version text,
tokens set<text>,
truncated_at map<uuid, blob>
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='information about the local node' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=0 AND
read_repair_chance=0.000000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
EDIT
Although it was great at the time, the blog I linked is now out of date. To run cqlsh in Windows:
First install Python 2.7.x (not Python 3!): download
Add python to your path (as a new environment variable)
Run the setup by navigating to C:\dir\to\cassandra\pylib in a cmd prompt and executing the line below:
python setup.py install
Congrats. Now you have cqlsh on Windows.
| Cassandra | 18,106,043 | 14 |
I'm using Cassandra to store pictures. We are currently mass migrating pictures from an old system. Everything works great for a while, but eventually we'd get a TimedOutException when saving which I assume is because the work queue was filled.
However, after waiting (several hours) for it to finish, the situation continues the same (it doesn't recover itself after stopping the migration)
There seems to be a problem with only 1 node, on which its tpstats command shows the following data
The pending MutationStage operations keep increasing even though we stopped the inserts hours ago.
What exactly does that mean? What is the MutationStage?
What can I check to see why it isn't stabilising after so long? All the other servers in the ring are at 0 pending operations.
Any new insert we attempt throws the TimedOutException mentioned above.
This is the ring information in case it's useful
(the node with issues is the first one)
EDIT: The last lines in the log are as follows
INFO [OptionalTasks:1] 2013-02-05 10:12:59,140 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='pics_persistent', ColumnFamily='master') (estimated 92972117 bytes)
INFO [OptionalTasks:1] 2013-02-05 10:12:59,141 ColumnFamilyStore.java (line 643) Enqueuing flush of Memtable-master@916497516(74377694/92972117 serialized/live bytes, 141 ops)
INFO [OptionalTasks:1] 2013-02-05 10:14:49,205 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='pics_persistent', ColumnFamily='master') (estimated 80689206 bytes)
INFO [OptionalTasks:1] 2013-02-05 10:14:49,207 ColumnFamilyStore.java (line 643) Enqueuing flush of Memtable-master@800272493(64551365/80689206 serialized/live bytes, 113 ops)
WARN [MemoryMeter:1] 2013-02-05 10:16:10,662 Memtable.java (line 197) setting live ratio to minimum of 1.0 instead of 0.0015255633589225548
INFO [MemoryMeter:1] 2013-02-05 10:16:10,663 Memtable.java (line 213) CFS(Keyspace='pics_persistent', ColumnFamily='master') liveRatio is 1.0 (just-counted was 1.0). calculation took 38ms for 86 columns
INFO [OptionalTasks:1] 2013-02-05 10:16:33,267 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='pics_persistent', ColumnFamily='master') (estimated 71029403 bytes)
INFO [OptionalTasks:1] 2013-02-05 10:16:33,269 ColumnFamilyStore.java (line 643) Enqueuing flush of Memtable-master@143498560(56823523/71029403 serialized/live bytes, 108 ops)
INFO [ScheduledTasks:1] 2013-02-05 11:36:27,798 GCInspector.java (line 122) GC for ParNew: 243 ms for 1 collections, 1917768456 used; max is 3107979264
INFO [ScheduledTasks:1] 2013-02-05 13:00:54,090 GCInspector.java (line 122) GC for ParNew: 327 ms for 1 collections, 1966976760 used; max is 3107979264
| I guess you're just overloading one of your nodes with writes - i.e. you write faster than it is capable of digesting. This is pretty easy if your writes are huge.
The MutationStage is increasing even after you stopped writing to the cluster, because the other nodes are still processing queued mutation requests and sending replicas to this overloaded node.
I don't know why one of the nodes gets overloaded, because there may be several reasons:
the node is slower than the others (different hardware or different configuration)
the cluster is not properly balanced (however, the beginning of your nodetool ring output suggests it is not the case)
you're directing all your writes to this particular node instead of distributing them to all nodes equally, e.g. by round-robin
you configured memtable size limits and/or cache sizes that are too big for the total heap space available, so your nodes are struggling with GC, and it just happened that this one was the first to fall into the GC death spiral
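If it helps, a few standard commands for narrowing down which of these it is (run them on the struggling node; the log path is an assumption and may differ on your install):

nodetool tpstats          # watch whether pending MutationStage keeps growing
nodetool compactionstats  # check whether compactions are backing up
nodetool info             # heap usage, load, uptime
grep -i GCInspector /var/log/cassandra/system.log   # look for long GC pauses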
| Cassandra | 14,714,413 | 14 |
We have a lot of user interaction data from various websites stored in Cassandra such as cookies, page-visits, ads-viewed, ads-clicked, etc.. that we would like to do reporting on. Our current Cassandra schema supports basic reporting and querying. However we also would like to build large queries that would typically involve Joins on large Column Families (containing millions of rows).
What approach is best suited for this? One possibility is to extract the data out to a relational database such as MySQL and do the data mining there. An alternative could be to use Hadoop with Hive or Pig to run map-reduce queries for this purpose. I must admit I have zero experience with the latter.
Does anyone have experience of the performance differences of one vs the other? Would you run map-reduce queries on a live Cassandra production instance or on a backup copy, to prevent query load from affecting write performance?
| In my experience Cassandra is better suited to processes where you need real-time access to your data, fast random reads and just generally handle large traffic loads. However, if you start doing complex analytics, the availability of your Cassandra cluster will probably suffer noticeably. In general from what I've seen it's in your best interest to leave the Cassandra cluster alone, otherwise the availability starts suffering.
Sounds like you need an analytics platform, and I would definitely advise exporting your reporting data out of Cassandra to use in an offline data-warehouse system.
If you can afford it, having a real data-warehouse would allow you to do complex queries with complex joins on multiple tables. These data-warehouse systems are widely used for reporting, here is a list of what are in my opinion the key players:
Netezza
Aster/TeraData
Vertica
A recent one which is gaining a lot of momentum is Amazon Redshift, but it is currently in beta, but if you can get your hands on it you could give this a try since it looks like a solid analytics platform with a pricing much more attractive than the above solutions.
Alternatives like using Hadoop MapReduce/Hive/Pig are also interesting to look at, but probably not a replacement for dedicated data-warehouse technologies. I would recommend Hive if you have a SQL background because it will be very easy to understand what you're doing and you can scale easily. There are actually already libraries integrated with Hadoop, like Apache Mahout, which allow you to do data-mining on a Hadoop cluster; you should definitely give this a try and see if it fits your needs.
To give you an idea, an approach that I've used and that has been working well so far is pre-aggregating the results in Hive and then having the reports themselves generated in a data warehouse like Netezza to compute complex joins.
| Cassandra | 14,532,230 | 14 |
I am pretty new to Cassandra, just started learning Cassandra a week ago.
I first read that it was NoSQL, but when I started using CQL, I started to wonder whether Cassandra is a NoSQL or SQL DB.
Can someone explain why CQL is more or less like SQL?
| CQL is declarative like SQL and the very basic structure of the query component of the language (select things where condition) is the same. But there are enough differences that one should not approach using it in the same way as conventional SQL.
The obvious items: 1. There are no joins or subqueries. 2. No transactions
Less obvious but equally important to note:
Except for the primary key, you can only apply a WHERE condition on a column if you have created an index on that column. In SQL, you don't have to index a column to filter on it but in CQL the select statement will fail outright.
There are no OR or NOT logical operators, only AND. It is very important to model your data so you won't need these two; it is very easy to accidentally forget.
Date handling is profoundly different. CQL permits ONLY the equal operator for timestamps so extremely common and useful expressions like this do not work: where dateField > TO_TIMESTAMP('2013-01-01','YYYY-MM-DD') Also, CQL does not permit string insert of dates accurate to millis (seconds only) -- but it does permit entry of millis since epoch as a long int -- which most other DB engines do NOT permit. Lastly, timezone (as GMT offset) is invisibly captured for both long millis and string formats without a timezone. This can lead to confusion for those systems that deliberately do not conflate local time + GMT offset.
You can ONLY update a table based on primary key (or an IN list of primary keys). You cannot update based on other column data, nor can you do a mass update like this: update table set field = value; CQL demands a where clause with the primary key.
Grammar for AND does not permit parens. To be fair, it's not necessary because of the lack of the OR operator, but this means traditional SQL rewriters that add "protective" parens around expressions will not work with CQL, e.g.: select * from www where (str1 = 'foo2') and (dat1 = 12312442);
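To make a couple of those points concrete (the index and update-by-key restrictions), here is a small sketch with a made-up table:

CREATE TABLE users (
    id uuid PRIMARY KEY,
    name text,
    city text
);

-- filtering on a non-key column is rejected until an index exists
CREATE INDEX users_city_idx ON users (city);
SELECT * FROM users WHERE city = 'Quito';

-- updates must name the primary key; a mass update with no key is rejected
UPDATE users SET name = 'Maria' WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;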
In general, it is best to use Cassandra as a big, resilient permastore of data for which a small number of very high level, very high performance queries can be applied to drag out a subset of data to work with at the application layer. That subset might be 1 million rows, yes. CQL and the Cassandra model is not designed for 2 page long SELECT statements with embedded cases, aggregations, etc. etc.
| Cassandra | 11,154,547 | 14 |
There are four high level APIs to access Cassandra and I do not have time to try them all. So I hoped to find somebody who could help me to choose the proper one.
I'll try to write down my findings about them:
Datanucleus-Cassandra-Plugin
pros:
supports JPA1, JPA2, JDO1 - JDO3 - as I read in a review, JDO scales better than Hibernate with JPA
all the pros as mentioned in kundera?
cons:
no experience with JDO up to now (relevant only for me of course ;)
documentation not found!
kundera
pros:
JPA 1.0 annotations with all advantages (standard conform, no boilerplate code, ...)
promise for following features in near future: JPA listeners, @PrePersist @PostPersist etc. - Relationships, @OneToMany, @ManyToMany etc. - Transactional support, @Transactional
cons:
early development stage of the plugin?
bugs?
no possibillity to fix problems in the JDO / JPA framework?
s7 pelops
pros:
pure java api --> finer control over persistence?
cons:
pure java api --> boilerplate code
hector 0.7
pros:
mavenized
spring integration --> dependency injection
pure java api --> finer control over persistence?
jmx monitoring?
managing of nodes seems to be easy and flexible
cons:
pure java api (no annotations) --> boiler plate code
Conclusion so far
As I am confident with RDBMS, Hibernate, JPA and Spring, and not so up to date anymore with EJB, my first impression was that going for kundera would be the right choice. But after reading some posts regarding JDO and DataNucleus, I am not sure anymore. As the learning curve for DataNucleus should be steep (also for experienced JPA developers?), I am not sure whether I should go for it.
My major concern is the status of the plugin, and also the forum support/help for JDO and the Datanucleus-Cassandra-Plugin, as it is not as widespread, as far as I understood.
Is anybody out there who already has experience with some of these frameworks and can give me a hint? Maybe a mixed strategy would make sense as well: in cases (if they exist) where JDO is not flexible/sufficient/whatever enough for my needs, fall back to one of the easier APIs of pelops or hector? Is this possible? Is there an approach, like in JPA, to get an SQL connection and fetch/put data?
After reading on a bit, I found the following additional information:
Datanucleus-Cassandra-Plugin is based on the pelops, which also can be accessed for more flexibility, more performance (?), which should be used on the column families with a lot of data, JDO/JPA access should be only used on "administrative" data, where performance is not so important and data amount is not overwhelming.
Which still leaves the question open to start with hector or pelops.
pelops for its later Datanucleus-Cassandra-Plugin extensibility, or
hector for its better support for node handling.
| I tried most of these solutions and find hector the best. Even when you have some problem you can always reach the people who wrote hector in #cassandra on freenode, and the code is more mature as far as I'm concerned. In a Cassandra client the most critical part would be connection pool management (since all the clients do mostly the same operations through Thrift, connection pooling is what makes a high-level client shine). In that case I would vote for hector, since I have been using it in production for over a year now with no visible problems (one reconnect issue was fixed as soon as I discovered it and sent an email about it).
I am still using cassandra 0.6 though.
| Cassandra | 5,232,123 | 14 |
What is the meaning of eventual consistency in Cassandra when nodes in a single cluster do not contain copies of the same data, but data is distributed among nodes? Since a single piece of data is recorded at a single place (node), why wouldn't Cassandra return the recent value from that single place of record? How do multiple copies arise in this situation?
| Cassandra's consistency is tunable. What can be tuned?
Number of nodes needed to agree on the data for reads... call it R
Number of nodes needed to agree on the data for writes... call it W
In case of 3 nodes, if we chose 2R and 2W, then during a read, if 2 nodes agree on a value, that is the true value. The 3rd may or may not have the same value.
In case of write, if 2W is chosen, then if data is written to 2 nodes, it is considered enough. This model IS consistent.
If R + W <= N, where N is the number of replicas, it will only be eventually consistent; if R + W > N, every read is guaranteed to see the latest write. For example, in the 3-node case above with R = 2 and W = 2, R + W = 4 > 3, so every read overlaps every write on at least one node.
Cassandra maintains a timestamp with each column (and each field of a column) so that replicas can eventually become consistent. There is a background mechanism to reach a consistent state.
But like I said, if R + W > N, then it is strongly consistent. That is why consistency is considered tunable in Cassandra.
Full consistency has to be reached at some point. This can be done using read repair i.e. during a read from say 3 nodes, 2 return a value, and 3rd is out of date, then a repair can be performed by cassandra on the 3rd node. This can also be done by a batch job from time to time.
| Cassandra | 4,584,353 | 14 |
Lately I have been reading a lot of blog posts about big sites (Facebook, Twitter, Digg, Reddit, to name a few) using Cassandra as their datastore instead of MySQL.
I would like to gather a list of resources to learn using cassandra. Hopefully some videos or podcasts explaining how to use cassandra.
My list
Twissandra - Twissandra is an example project, created to learn and demonstrate how to use Cassandra. Running the project will present a website that has similar functionality to Twitter
WTF is a supercolumn - WTF is a SuperColumn? An Intro to the Cassandra Data Model
I hope there are resources to watch on how to use Cassandra.
Many thanks,
Alfred
| I found these articles really helpful coming from the relational world:
http://www.sodeso.nl/?p=80
http://www.sodeso.nl/?p=108
http://www.sodeso.nl/?p=207
Right now the docs available for Cassandra are limited in some places. I've been watching the Cassandra user and dev email lists like a hawk. That seems to be where most of the FAQs live.
| Cassandra | 2,438,362 | 14 |
Surely one can run a single node cluster but I'd like some level of fault-tolerance.
At present I can afford to lease two servers (8GB RAM, private VLAN @1GigE) but not 3.
My understanding is that 3 nodes is the minimum needed for a Cassandra cluster because there's no possible majority between 2 nodes, and a majority is required for resolving versioning conflicts. Oh wait, am I thinking of "vector clocks" and Riak? Ack! Cassandra uses timestamps for conflict resolution.
For 2 nodes, what is the recommended read/write strategy? Should I generally write to ALL (both) nodes and read from ONE (N=2; W=N/2+1; W=2/2+1=2)? Cassandra will use hinted-handoff as usual even for 2 nodes, yes?
These 2 servers are located in the same data center FWIW.
Thanks!
| If you need availability on a RF=2, clustersize=2 system, then you can't use ALL or you will not be able to write when a node goes down.
That is why people recommend 3 nodes instead of 2, because then you can do quorum reads+writes and still have both strong consistency and availability if a single node goes down.
With just 2 nodes you get to choose whether you want strong consistency (write with ALL) or availability in the face of a single node failure (write with ONE) but not both. Of course if you write with ONE cassandra will do hinted handoff etc as needed to make it eventually consistent.
| Cassandra | 2,330,562 | 14 |
We're planning on migrating our environment from Java 8 to OpenJDK 10. Doing this on my local machine, I've found that Cassandra will no longer start for me, giving the following error :
I can't find any solid information online that says it is definitely not supported.
This post from 4 months ago suggests that they do not support Java 10, but doesn't say it is confirmed, and is more inferred. There is also a comment on it from another user saying they have managed to get it running on Java 11.
The final comment on this ticket on datastax says "We've updated our CI matrix to include Java 10 and everything works except for the aforementioned OSGi testing issues." I'm not sure what to take away from that, but it seems to imply that it is working with Java 10 now, as the ticket is marked as resolved.
This ticket, they discuss support for Java 11. There are a few comments discussing the need to even support Java 10, but they don't really give a definitive answer on whether they will or not.
Finally this blog discusses a way to get Java 11 working with cassandra. However I notice this is using Cassandra 4.0. Has this officially been released? I notice on their website they say the release date is tbd and says the current stable release is 3.11.3, and there is no mention of it on their compatibility page.
I currently installed Cassandra on windows via Datastax, but I have also tried cloning the current git repository and running it from there, but I get the same error message (although on their github they do seem to say it has only been tested with Java 8).
Do they simply not support 10 then? Also if anyone knows if they plan to release 4.0 soon, and if that will definitely support 11 (and I assume 10 ?), that would be a massive help.
| Cassandra 4.0 has explicit support for both Java 8 and Java 11. In fact, they even split-up the configuration files as such:
$ pwd
/Users/aaron/local/apache-cassandra-4.0-SNAPSHOT/conf
$ ls -a jvm*
jvm-clients.options jvm11-clients.options jvm8-clients.options
jvm-server.options jvm11-server.options jvm8-server.options
The reason for support of these specific versions is two-fold. First of all, Java 8 has been the de-facto standard for Cassandra for a few years now. Users expect that it will still work on Java 8 in the future.
Given the new 6 month release cycle of Java, Java 9 and Java 10 will no longer be "current" when Apache Cassandra 4.0 comes out. Plus, the tests which run during the build have shown to be picky about which version of Java they work with. Therefore, the decision was made to go support Java 8 and 11 for 4.0, as work on Java 9 and 10 seemed to be lower-priority.
That's not to say that Cassandra 4.0 won't run on Java 9 or 10. In fact, CASSANDRA-9608 even has a patch submitted which should cover it. But the fact remains that Java 8 is included due to its longstanding use in the Cassandra user base. Java 11 will be the current JDK/JRE at the time 4.0 releases. If you want to be sure that your cluster will run well, I'd pick one of those two.
But until 4.0, the most recent patch of Java 8 is really the only option.
| Cassandra | 52,334,649 | 13 |
I'm trying to explore the database and want to see all tables that exist there. What's the command that's equivalent to SHOW TABLES; in SQL?
| to get all table names : DESC tables;
to get detailed view of a table : DESC table <table_name>;
to get detailed view of all tables in a keyspace : DESC keyspace <keyspace_name>;
| Cassandra | 51,274,364 | 13 |
I read this in the official DSE docs but they did not go in depth into how. Can someone explain, or provide any links on how?
| It's better to look into the architecture guide for this kind of information.
There are multiple places that could be considered some kind of load balancer. First, you can send requests to any node in the cluster, and this node will act as the "coordinator", re-sending the request to the nodes that actually own the data. Because this is not very optimal, drivers provide a so-called token-aware load balancing policy, where the driver is able to infer from the data which nodes are responsible for handling it, and sends the request to one of those nodes, selected based on other information (contributed by other load balancing policies).
In the case of multiple data centers, drivers and Cassandra itself are able to send requests to "remote" DCs if the "local" one isn't available (the notion of remote and local is specific to each consumer). But in this case, some other factors will play their role - for example, if you use LOCAL_ consistency levels, then your requests won't be sent to a "remote" data center.
Talking about application design - you may use a load balancer in front of your application layer, have the application instances connect to the Cassandra cluster in their "local" data center, and use LOCAL_ consistency levels to perform their operations. In case of downtime of one of the DCs, the load balancer should stop sending traffic to the application layer in that DC.
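For example, with the DataStax Java driver 3.x the token-aware, DC-aware routing described above is configured roughly like this (the contact point and data center name are placeholders); pairing it with LOCAL_* consistency levels keeps traffic in the local DC:

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.0.1")                      // placeholder
        .withLoadBalancingPolicy(
                new TokenAwarePolicy(                     // prefer replicas that own the data
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")       // placeholder DC name
                                .build()))
        .build();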
| Cassandra | 50,400,961 | 13 |
I'm modeling my table for Cassandra 3.0+. The objective is to build a table that stores users' activities; here is what I've done so far:
(userid comes from another MySQL database)
CREATE TABLE activity (
userid int,
type int,
remoteid text,
time timestamp,
imported timestamp,
visibility int,
title text,
description text,
img text,
customfields MAP<text,text>,
PRIMARY KEY (userid, type, remoteid, time, imported))
These are the main queries that I use:
SELECT * FROM activity WHERE userid = ? AND remoteid = ?;
SELECT * FROM activity WHERE userid = ? AND type = ? LIMIT 10;
Now I need to add the column visibility to the second query. So, from what I've learned, I can choose between a secondary index and a materialized view.
These are the facts:
Here I have one partition per user, and inside there are thousands of rows (activities).
I always use the partition key (userid) in all my queries to access the data.
The global number of activities is about 30 million, and growing.
The visibility column has low cardinality (just 3 values) and could be updated, but rarely.
So what should I choose, materialized view or index? I know that indexes on low-cardinality columns are a bad choice, but my queries always include the partition key and a limit, so maybe it is not that bad.
| If you are always going to use the partition key I recommend using secondary indexes.
Materialized views are better when you do not know the partition key
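For the table in the question, that could look like the sketch below. Note this is only an illustration; depending on which clustering columns you also restrict, Cassandra may still ask for ALLOW FILTERING:

CREATE INDEX activity_visibility_idx ON activity (visibility);

SELECT * FROM activity WHERE userid = 123 AND visibility = 1 LIMIT 10;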
References:
Principal Article!
• Cassandra Secondary Index Preview #1
Here is a comparison of Materialized Views and secondary indexes
• Materialized View Performance in Cassandra 3.x
And here is why, when the PK is known, it is more effective to use an index
• Cassandra Native Secondary Index Deep Dive
| Cassandra | 42,158,945 | 13 |
If I am not wrong, one can connect to a Cassandra cluster knowing at least one of the nodes that is in the cluster, and then the others can be discovered.
Let's say I have three nodes (1, 2 and 3) and I connect to those nodes like this:
Cluster.builder().addContactPoints("1,2,3".split(",")).build();
Then, if node 3 for example goes down, and the IP cannot be resolved, this line of code will throw an IllegalArgumentException as stated in the docs:
@throws IllegalArgumentException if no IP address for at least one of {@code addresses} could be found
Why would anyone want this behavior? I mean, if one of the nodes is down, I want the app to be able to run, as the Cassandra is still working fine.
I have checked this Cassandra Java driver: how many contact points is reasonable?
but that does not answer my question, as it doesn't say anything about hosts that can't be reached.
How should I handle this? Maybe this is changed in another version of the java driver? I am currently using cassandra-driver-core-3.0.3
| This validation is only there to make sure that all the provided hosts can be resolved; it doesn't even check whether a Cassandra server is running on each host. So it is basically there to ensure that you did not make any typos while providing the hosts, as it doesn't consider an unresolvable host to be a normal use case.
As a workaround in your case (a host having been removed from the DNS entries), you could simply call the method addContactPoint(String address) explicitly instead of using addContactPoints(String... addresses) (which behind the scenes simply calls addContactPoint(String address) for each provided address) and manage the exception by yourself.
The code could be something like this:
Cluster.Builder builder = Cluster.builder();
// Boolean used to check if at least one host could be resolved
boolean found = false;
for (String address : "1,2,3".split(",")) {
try {
builder.addContactPoint(address);
// One host could be resolved
found = true;
} catch (IllegalArgumentException e) {
// This host could not be resolved so we log a message and keep going
Log.log(
Level.WARNING,
String.format("The host '%s' is unknown so it will be ignored", address)
);
}
}
if (!found) {
// No host could be resolved so we throw an exception
throw new IllegalStateException("All provided hosts are unknown");
}
Cluster cluster = builder.build();
FYI: I've just created a ticket to propose an improvement in the Java driver https://datastax-oss.atlassian.net/browse/JAVA-1334.
| Cassandra | 39,727,744 | 13 |
Using Cassandra, I want to create keyspace and tables dynamically using Spring Boot application. I am using Java based configuration.
I have an entity annotated with @Table whose schema I want to be created before application starts up since it has fixed fields that are known beforehand.
However, depending on the logged-in user, I also want to create additional tables for those users dynamically and be able to insert entries into those tables.
Can somebody guide me to some resources that I can make use of, or point me in the right direction on how to go about solving these issues? Thanks a lot for the help!
| The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so...
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-cassandra</artifactId>
<version>1.3.5.RELEASE</version>
</dependency>
In addition, this will add the Spring Data Cassandra dependency to your application.
With Spring Data Cassandra, you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean) by calling the setKeyspaceCreations(:Set) method.
The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean, add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean.
For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.
Specifically, you can have your application @Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for @Table domain object/entities.
With your application @Table domain object/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method, setSchemaAction(:SchemaAction). This will create Tables in your Keyspace for all domain object/entities found during the scan, providing you identified the proper Keyspace on your CassandraSessionFactoryBean appropriately.
Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace.
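To tie the table-creation part together, here is a minimal sketch of a configuration class; it uses AbstractCassandraConfiguration (a subclass of the class mentioned above), and the keyspace name and entity package are assumptions. Package names moved between Spring Data Cassandra 1.x and 2.x, so adjust the imports for your version:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.config.java.AbstractCassandraConfiguration; // 1.x package

@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "my_keyspace";                          // assumed keyspace name
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[] { "com.example.domain" };  // assumed package holding @Table entities
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE;                    // create tables for scanned entities on startup
    }
}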
Now...
For the "additional" Tables per user, that is quite a bit more complicated and tricky.
You might be able to leverage Spring Profiles here, however, profiles are generally only applied on startup. If a different user logs into an already running application, you need a way to supply additional @Configuration classes to the Spring ApplicationContext at runtime.
Your Spring Boot application could inject a reference to a AnnotationConfigApplicationContext, and then use it on a login event to programmatically register additional @Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh().
You also need to appropriately handle the situation where the Tables already exist.
This is not currently supported in SD Cassandra, but see DATACASS-219 for further details.
Technically, it would be far simpler to create all the possible Tables needed by the application for all users at runtime and use Cassandra's security settings to restrict individual user access by role and assigned permissions.
Another option might be just to create temporary Keyspaces and/or Tables as needed when a user logs in into the application, drop them when the user logs out.
Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, tradeoffs and considerations then it does technical feasibility, so be careful.
Hope this helps.
Cheers!
| Cassandra | 37,352,689 | 13 |
In particular I was looking at this page where it says:
If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
I'm confused as to what using LWTs for read operations looks like. Specifically how this relates to per-query consistency (and serialConsistency) levels.
The description for SERIAL read consistency raises further questions:
Allows reading the current (and possibly uncommitted) state of data without proposing a new addition or update.
That suggests that using SERIAL for reads is not "using a LWT".
But then
How does Cassandra know to check for in-progress transactions when you do a read?
What is the new update that is proposed while you're trying to read, and how does this affect the read?
How would that work if the consistency you're reading at (say ONE for example) is less than the serialConsistency used for writing?
Once you use a LWT on a table (or row?, or column?), are all non-SERIAL reads forced to take the penalty of participating in quorums and the transaction algorithm?
Does the requirement actually apply to the whole row, or just the columns involved in the conditional statement?
If I ignore this advice and make both serial and non-serial reads/writes, in what way do the LWTs fail?
|
How does Cassandra know to check for in-progress transactions when you do a read?
This is exactly what the SERIAL consistency level indicates. It makes sure that a query will only return results after all pending transactions have been fully executed.
What is the new update that is proposed while you're trying to read, and how does this affect the read?
I think what the doc is trying to say is that the read will be handled just like a LWT - just without making any updates on its own.
How would that work if the consistency you're reading at (say ONE for example) is less than the serialConsistency used for writing?
Reads using SERIAL will always imply QUORUM as the consistency level. Reading with ONE will not provide you any of the guarantees provided by SERIAL, and you can end up reading stale data.
Once you use a LWT on a table (or row?, or column?), are all non-SERIAL reads forced to take the penalty of participating in quorums and the transaction algorithm?
No. You can use non-SERIAL consistency levels for your queries and have them executed with the exact same performance characteristics as any other non-serial queries.
Does the requirement actually apply to the whole row, or just the columns involved in the conditional statement?
No, I think you should be fine as long as you use different columns for serial reads/writes (including conditions) and regular reads/writes.
If I ignore this advice and make both serial and non-serial reads/writes, in what way do the LWTs fail?
If you execute regular writes, not being executed as part of a LWT, those writes will be applied at any time, without interfering at all with the consensus process of LWTs. As a consequence, regular writes can in theory change a value that is part of a LWT condition at a time between evaluating the condition and applying the update, which is a potential cause for inconsistencies you wanted to avoid using LWTs.
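For reference, issuing such a serial read from the Java driver is just a matter of setting the consistency level on the statement; the table, column and accountId below are placeholders:

Statement read = new SimpleStatement("SELECT balance FROM accounts WHERE id = ?", accountId)
        .setConsistencyLevel(ConsistencyLevel.SERIAL); // also completes any in-flight Paxos round
ResultSet rs = session.execute(read);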
| Cassandra | 34,790,674 | 13 |
How should I check for an empty resultset using datastax java cassandra driver?
Suppose I'm executing the following query "SELECT * FROM my_table WHERE mykey=something"
there is a great chance that the query will not be matched. The following code does not work:
if (rs != null)
rs.one().getString("some_column");
| You were pretty close, the correct solution is:
Row r = rs.one();
if (r != null)
r.getString("some_column");
The driver will always return a result set, whether or not there were any returned results. The documentation for one() states that if no rows were returned rs.one() returns null.
You can also use getAvailableWithoutFetching(), which returns the number of rows in the result set without fetching more rows. Since pageSize has to be >= 1, you can be assured that if there is at least 1 row, this will always return a value greater than 0.
| Cassandra | 29,471,658 | 13 |
I'm trying to replicate a SQL database in Cassandra, but while I had no problem creating the tables, I cannot find an easy-to-understand example that shows how I can create foreign keys in Cassandra.
So, If I have this in SQL:
CREATE TABLE COOP_USUARIO (
CI VARCHAR2 (13 BYTE) NOT NULL ,
CUENTA VARCHAR2 (20 BYTE) NOT NULL ,
NOMBRE VARCHAR2 (50 BYTE) NOT NULL ,
EMAIL VARCHAR2 (255 BYTE) NOT NULL ,
DIRECCION VARCHAR2 (255 BYTE) ,
CIUDAD NUMBER NOT NULL ,
TELEFONO VARCHAR2 (10 BYTE) NOT NULL ,
TIPO_PERSONA NUMBER (1) NOT NULL ,
);
CREATE UNIQUE INDEX COOP_USUARIO_PK ON COOP_USUARIO(
CI ASC
);
ALTER TABLE COOP_USUARIO ADD CONSTRAINT COOP_USUARIO_PK PRIMARY KEY ( CI ) ;
CREATE TABLE COOP_CIUDADES
(
ID NUMBER NOT NULL ,
NOMBRE VARCHAR2 (25 BYTE) NOT NULL ,
PROVINCIA NUMBER NOT NULL
) ;
CREATE UNIQUE INDEX COOP_CIUDADES_PK ON COOP_CIUDADES
(
ID ASC
);
ALTER TABLE COOP_CIUDADES ADD CONSTRAINT COOP_CIUDADES_PK PRIMARY KEY ( ID ) ;
ALTER TABLE COOP_USUARIO ADD CONSTRAINT COOP_USUARIO_CIUDADES_FK FOREIGN KEY ( CIUDAD ) REFERENCES COOP_CIUDADES ( ID ) ;
What is the Cassandra CQL code for the same purpose?
| Simple answer is: There is no CQL code for the same purpose.
CQL does not have a concept of foreign keys or any concept of constraints between tables in the same way that you can't do joins between tables.
If you need a constraint between tables then you would need to handle this in code.
| Cassandra | 27,676,995 | 13 |
I have a date column in a Cassandra column family. When I retrieve data from this CF using datastax java API, this date object can be taken as a java.util.Date object.
It has a getYear() method but it is deprecated. The corresponding javadoc says:
As of JDK version 1.1, replaced by Calendar.get(Calendar.YEAR) - 1900.
How can I get the year, month, day attributes from this date object properly?
| Could you try it like this:
// create a calendar
Calendar cal = Calendar.getInstance();
cal.setTime(datetime); // use the java.util.Date object as the argument
// get the value of all the calendar date fields.
System.out.println("Calendar's Year: " + cal.get(Calendar.YEAR));
System.out.println("Calendar's Month: " + cal.get(Calendar.MONTH));
System.out.println("Calendar's Day: " + cal.get(Calendar.DATE));
As mentioned in javadocs;
@Deprecated public int getYear() Deprecated. As of JDK version 1.1,
replaced by Calendar.get(Calendar.YEAR) - 1900. Returns a value that
is the result of subtracting 1900 from the year that contains or
begins with the instant in time represented by this Date object, as
interpreted in the local time zone. Returns: the year represented by
this date, minus 1900.
| Cassandra | 27,489,980 | 13 |
I read that in nosql (cassandra for instance) data is often stored denormalized. For instance see this SO answer or this website.
An example is if you have a column family of employees and departments and you want to execute a query: select * from Emps where Birthdate = '25/04/1975'
Then you have to make a column family birthday_Emps and store the ID of each employee as a column. So then you can query the birthday_Emps family for the key '25/04/1975' and instantly get all the ID's of the employees born on that date. You can even denormalize the employee details into birthday_Emps as well so that you also instantly have the employee names.
Is this really the way to do it?
Whenever an employee is deleted or inserted then you will have to remove the employee from birthday_Emps too. And in another example someone even said that sometimes you have a situation where one delete in some table requires like 100's of deletes in other tables. Is this really common to do?
Is it common to do joins in application code? Is there software that allows you to create pre-written applications to join together data from different queries?
Are there best practices, patterns, etc for handling these data model questions?
| "Yes" for the most part, taking an approach of query-based data modeling really is the best way to do it.
That is still a good idea to do, because the speed of your queries makes it worth it. Yes, there's a little more housecleaning to do. I haven't had to execute 100s of deletes from other column families, but occasionally there is some complicated clean-up to do. But, you shouldn't be doing a whole lot of deleting in Cassandra anyway (anti-pattern).
No. Client-side JOINs are just as bad as distributed JOINs. The whole idea is to create a table to return data for each specific query...denormalized and/or replicated...and thus negating the need to do a JOIN at all. The exception to this, is if you are running OLAP queries for analysis, you can use a tool like Apache Spark to execute an ad-hoc, distributed JOIN. But it's definitely not something you'd want to do on a production system.
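To make the birthday_Emps example from the question concrete in CQL, the query table might look like the sketch below (column names and types are assumptions):

CREATE TABLE birthday_emps (
    birthdate text,      -- e.g. '1975-04-25'; a timestamp/date type also works
    emp_id uuid,
    emp_name text,
    PRIMARY KEY (birthdate, emp_id)
);

SELECT emp_id, emp_name FROM birthday_emps WHERE birthdate = '1975-04-25';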
A few articles I can recommend:
Getting Started with Cassandra Time Series Data Modeling - Written by DataStax's Chief Evangelist Patrick McFadin, it covers one of the more common Cassandra use cases in a few different ways.
Escaping From Disco-Era Data Modeling - This one talks about some of the obstacles that beginners with Cassandra can face, as well as the general approach to take in overcoming them. Disclaimer: I am the author.
Cassandra Data Modeling Best Practices, Part 1 - You can't go wrong with Jay Patel's (eBay) classic article on Cassandra modeling practices. It's a little dated in that the examples are grounded in the pre-CQL world, but the techniques still resonate.
| Cassandra | 27,281,536 | 13 |
I get the above error when I try to use the following CQL statement; I am not sure what's wrong with it.
CREATE TABLE Stocks(
id uuid,
market text,
symbol text,
value text,
time timestamp,
PRIMARY KEY(id)
) WITH CLUSTERING ORDER BY (time DESC);
Bad Request: Only clustering key columns can be defined in CLUSTERING ORDER directive
But this works fine, can't I use some column which is not part of primary key to arrange my rows ?
CREATE TABLE timeseries (
... event_type text,
... insertion_time timestamp,
... event blob,
... PRIMARY KEY (event_type, insertion_time)
... )
... WITH CLUSTERING ORDER BY (insertion_time DESC);
| "can't I use some column which is not part of primary key to arrange my rows?"
No, you cannot. From the DataStax documentation on the SELECT command:
ORDER BY clauses can select a single column only. That column has to be the second column in a compound PRIMARY KEY. This also applies to tables with more than two column components in the primary key.
Therefore, for your first CREATE to work, you will need to adjust your PRIMARY KEY to this:
PRIMARY KEY(id,time)
The second column of in a compound primary key is known as the "clustering column." This is the column that determines the on-disk sort order of data within a partitioning key. Note that last part in italics, because it is important. When you query your Stocks column family (table) by id, all "rows" of column values for that id will be returned, sorted by time. In Cassandra you can only specify order within a partitioning key (and not for your entire table), and your partitioning key is the first key listed in a compound primary key.
Of course the problem with this, is that you probably want id to be unique (which means that CQL will only ever return one "row" of column values per partitioning key). Requiring time to be part of the primary key negates that, and makes it possible to store multiple values for the same id. This is the problem with partitioning your data by a unique id. It might be a good idea in the RDBMS world, but it can make querying in Cassandra more difficult.
Essentially, you are going to need to revisit your data model here. For instance, if you wanted to query prices over time, you could name the table something like "StockPriceEvents" with a primary key of (id,time) or (symbol,time). Querying that table would give you the prices recorded for each id or symbol, sorted by time. Now that may or may not be of any value to your use case. Just trying to explain how primary keys and sort order work in Cassandra.
Note: You should really use column names that have more meaning. Things like "id," "time," and "timeseries" are pretty vague and don't really describe anything about the context in which they are used.
| Cassandra | 27,235,061 | 13 |
I want to verify that rows are getting added to the table. What cql statement would show the last n rows from the table below?
Table description below:
cqlsh:timeseries> describe table option_data;
CREATE TABLE option_data (
ts bigint,
id text,
strike decimal,
callask decimal,
callbid decimal,
maturity timestamp,
putask decimal,
putbid decimal,
PRIMARY KEY ((ts), id, strike)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.000000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
cqlsh:timeseries>
| You didn't specify last n "by what".
To get the last N per id:
SELECT * FROM option_data WHERE ts=1 ORDER BY id DESC LIMIT N;
ORDER BY clause can only be applied to the second column in a compound primary key. If you need to query by time you will need to think about your data model a little more.
If your queries are most often "last N", you might consider writing something like this:
CREATE TABLE time_series (
id text,
t timeuuid,
data text,
PRIMARY KEY (id, t)
) WITH CLUSTERING ORDER BY (t DESC)
... where 'id' is your time series id. The CLUSTERING ORDER reverses the order of timeuuid 't', causing the cells to be stored in a natural order for your query.
With this, you would get the last five events as follows:
SELECT * FROM time_series WHERE id='stream id' LIMIT 5;
There is a lot of information out there for time series in Cassandra. I suggest reading some of the more recent articles on the matter.
| Cassandra | 26,168,859 | 13 |
I am writing an application and I need to be able to tell if inserts and updates succeed. I am using "INSERT ... IF NOT EXISTS" to get the lightweight transaction behavior and noticed that the result set returned from execute contains a row with updated data and an "[applied]" column that can be queried. That is great. But I have an update statement that is returning an empty ResultSet. It appears as though the update is succeeding but I want a programmatic way to verify that.
To Clarify:
I have turned on some logging of the result sets returned by my mutations. I have found that "INSERT...IF NOT EXISTS" returns a ResultSet with a boolean column named "[applied]". If "[applied]" is false, it also returns the row that already exists.
With UPDATE, I always see an empty ResultSet.
So I have two questions:
Where is the documentation on what the ResultSet should contain for each type of mutation? I did not see it in the CQL docs or in the Java Driver docs. I even tried looking at other language integrations' docs and did not find any description of the ResultSet contents for mutations.
Is there any way to find out how many rows were modified by an UPDATE or deleted by a DELETE?
| In Cassandra insert/update/delete behave the same and they are called mutations. If your client is not returning any exceptions, then the mutation is done.
If you are concerned about consistency of your mutation calls, then add USING CONSISTENCY with higher levels.
http://www.datastax.com/docs/1.0/references/cql/index
http://www.datastax.com/docs/1.1/dml/data_consistency
If you are after good consistency, I recommend using LOCAL_QUORUM for both reads and mutations. That way you don't have to worry about programmatically checking a mutation, because that would require a subsequent read.
| Cassandra | 21,147,871 | 13 |
Is there a CQL query to list all existing indexes for particular key space, or column family?
| You can retrieve primary keys and secondary indexes using the system keyspace:
SELECT column_name, index_name, index_options, index_type, component_index
FROM system.schema_columns
WHERE keyspace_name='samplekp' AND columnfamily_name='sampletable';
Taking, for example, the following table declaration:
CREATE TABLE sampletable (
key text,
date timestamp,
value1 text,
value2 text,
PRIMARY KEY(key, date));
CREATE INDEX ix_sample_value2 ON sampletable (value2);
The query mentioned above would get something like these results:
column_name | index_name | index_options | index_type | component_index
-------------+------------------+---------------+------------+-----------------
date | null | null | null | 0
key | null | null | null | null
value1 | null | null | null | 1
value2 | ix_sample_value2 | {} | COMPOSITES | 1
| Cassandra | 21,092,524 | 13 |
Can a primary key in Cassandra contain a collection column?
Example:
CREATE TABLE person (
first_name text,
emails set<text>,
description text,
PRIMARY KEY (first_name, emails)
);
| Collection types cannot be part of the primary key, and neither can the counter type. You can easily test this yourself, but the reason might not be obvious.
Sets, lists, and maps are hacks on top of the storage model (but I don’t mean that in a negative way). A set is really just a number of columns with the same key prefix. To be a part of the primary key the value must be scalar, and the collection types aren’t.
| Cassandra | 16,470,776 | 13 |
I need to check if a certain keyspace exists in a Cassandra database. I need to write something like this:
if (keyspace KEYSPACE_NAME not exists) create keyspace KEYSPACE_NAME;
There's a command describe keyspace, but can I somehow retrieve information from it in cql script?
| Just providing fresh information. As of CQL3, while creating a keyspace you can add an IF NOT EXISTS clause like this
CREATE KEYSPACE IF NOT EXISTS Test
WITH replication = {'class': 'SimpleStrategy',
'replication_factor' : 3}
| Cassandra | 9,656,371 | 13 |
Here's a sample of the scenario I'm facing. Say I have this column family:
create column family CompositeTypeCF
with comparator = 'CompositeType(IntegerType,UTF8Type)'
and key_validation_class = 'UTF8Type'
and default_validation_class = 'UTF8Type'
Here's some sample Java code using Hector as to how I'd go about inserting some data into this column family:
Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "192.168.1.6:9160");
Keyspace keyspaceOperator = HFactory.createKeyspace("CompositeTesting", cluster);
Composite colKey1 = new Composite();
colKey1.addComponent(1, IntegerSerializer.get());
colKey1.addComponent("test1", StringSerializer.get());
Mutator<String> mutator = HFactory.createMutator(keyspaceOperator, StringSerializer.get());
Mutator<String> addInsertion = mutator.addInsertion("rowkey1", "CompositeTypeCF",
HFactory.createColumn(colKey1, "Some Data", new CompositeSerializer(), StringSerializer.get()));
mutator.execute();
This works, and if I go to the cassandra-cli and do a list I get this:
$ list CompositeTypeCF;
Using default limit of 100
-------------------
RowKey: rowkey1
=> (column=1:test1, value=Some Data, timestamp=1326916937547000)
My question now is this: How do I go about querying this data in Hector? Basically I would need to query it in a few ways:
Give me the whole row where Row Key = "rowkey1"
Give me the column data where the first part of the column name = some integer value
Give me all the columns where the first part of the column name is within a certain range
| Good starting point tutorial here.
But, after finally having the need to use a composite component and attempting to write queries against the data, I figured out a few things that I wanted to share.
When searching Composite columns, the results will be a contiguous block of columns.
So, assuming a composite of 3 Strings, and my columns look like:
A:A:A
A:B:B
A:B:C
A:C:B
B:A:A
B:B:A
B:B:B
C:A:B
For a search from A:A:A to B:B:B, the results will be
A:A:A
A:B:B
A:B:C
A:C:B
B:A:A
B:B:A
B:B:B
Notice the "C" Components? There are no "C" components in the start/end terms! what gives? These are all the results between A:A:A and B:B:B columns. The Composite search terms do not give the results as if processing nested loops (this is what I originally thought), but rather, since the columns are sorted, you are specifying the start and end terms for a contiguous block of columns.
When building the Composite search entries, you must specify the ComponentEquality
Only the last term should be GREATER_THAN_EQUAL, all the others should be EQUAL. e.g. for above
Composite start = new Composite();
start.addComponent(0, "A", Composite.ComponentEquality.EQUAL);
start.addComponent(1, "A", Composite.ComponentEquality.EQUAL);
start.addComponent(2, "A", Composite.ComponentEquality.EQUAL);
Composite end = new Composite();
end.addComponent(0, "B", Composite.ComponentEquality.EQUAL);
end.addComponent(1, "B", Composite.ComponentEquality.EQUAL);
end.addComponent(2, "B", Composite.ComponentEquality.GREATER_THAN_EQUAL);
SliceQuery<String, Composite, String> sliceQuery = HFactory.createSliceQuery(keyspace, se, ce, se);
sliceQuery.setColumnFamily("CF").setKey(myKey);
ColumnSliceIterator<String, Composite, String> csIterator = new ColumnSliceIterator<String, Composite, String>(sliceQuery, start, end, false);
while (csIterator.hasNext()) ....
| Cassandra | 8,916,820 | 13 |
Does anyone have advice on using cassandra with scala? There is no native scala-cassandra client supporting cassandra version 8.0+, so I have to use hector, and it seems to work OK but not to be concise. Do you have any attempts, recommendations or any wrapper code,.. etc for hector ?
| The official Scala driver for Apache Cassandra and Datastax Enterprise, with full support for CQL 3.0, is phantom.
Phantom was developed at Outworkers, official Datastax partners, explicitly to supersede all other drivers. It's being actively developed and maintained, with full support for all the newest Cassandra features.
Disclaimer: I am the project lead on phantom, and as a result my recommendation may be biased. We offer more in-depth feature comparisons on the phantom wiki.
| Cassandra | 6,382,763 | 13 |
Anyone familiar enough with the Cassandra engine (via PHP using phpcassa lib) to know offhand whether there's a corollary to the sql-injection attack vector? If so, has anyone taken a stab at establishing best practices to thwart them? If not, would anyone like to ; )
| No. The Thrift layer used by phpcassa is an RPC framework, not based on string parsing.
| Cassandra | 5,998,838 | 13 |
Like many these days, I am an old relational-model user approaching Cassandra for the first time. I have been trying to understand Cassandra's data model, and when I read about it I frequently encounter statements that encourage me to think about it as 4 and 5 dimensional maps.
Now I'm familiar with an ordinary key/value Map, but I have never thought of how many dimensions it has, and that gives me no basis to plunge headlong into trying to visualize 4 and 5 dimensions.
Is there a more gentle introduction to dimensionality in maps? How many dimensions are there in an ordinary hashtable? One? Two? Zero?
If an ordinary hashtable has, say, just one dimension, then what would a two-dimensional map be? If two, then what would a 3-dimensional map be?
| Map<String, String> -- One dimension
Map<String, Map<String, String>> -- Two dimensions
Map<String, Map<String, Map<String,String>>> -- Three dimensions
etc...
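To make the nesting concrete, here is a small Java sketch (the keys and values are made up) of the two-dimensional case, which is roughly how a classic Cassandra column family can be pictured: row key -> (column name -> value).
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
public class MapDimensions {
    public static void main(String[] args) {
        // Two dimensions: row key -> (column name -> value).
        Map<String, SortedMap<String, String>> columnFamily = new HashMap<>();
        SortedMap<String, String> row = new TreeMap<>(); // columns are kept sorted by name
        row.put("email", "alice@example.com");
        row.put("name", "Alice");
        columnFamily.put("user:42", row);
        // One lookup key per dimension:
        String email = columnFamily.get("user:42").get("email");
        System.out.println(email);
    }
}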
| Cassandra | 5,719,437 | 13 |
I'm running a 4-node Cassandra cluster. Some of our nodes have some very large snapshots, and we're running out of disk space. I need to delete the snapshots, but I can't find any documentation which states how to do this properly. Do I just shut the node down and delete the files in the snapshots directory? Is there some kind of command? Thanks.
| OK I figured this one out (with the help of IRC). It's nodetool -h localhost clearsnapshot.
| Cassandra | 5,554,710 | 13 |
Has anyone had any success with connecting to a Cassandra cluster using DBeaver Community Edition? I've tried to follow this post, but haven't had any success. I have to have authentication enabled, and I get an error saying:
Authentication error on host /x.x.x.x:9042: Host /x.x.x.x:9042 requires authentication, but no authenticator found in Cluster configuration
| IMPORTANT UPDATE
The Simba JDBC driver from Magnitude is no longer available for free. It is no longer downloadable from the DataStax website, so the instructions in this post are obsolete.
The alternative option is to use ING Bank's open-source JDBC wrapper and I have documented the steps for using it with DBeaver Community Edition on DBA Stack Exchange (post #340409).
Overview
DataStax offers the JDBC driver from Magnitude (formerly Simba) to users at no cost so you should be able to use it with DBeaver.
These are the high-level steps for connecting to a Cassandra cluster with DBeaver:
Download the Simba JDBC driver from DataStax
Import the Simba JDBC driver
Create a new connection to your cluster
Download the driver
Go to https://downloads.datastax.com/#odbc-jdbc-drivers.
Select Simba JDBC Driver for Apache Cassandra.
Select JDBC 4.2.
Accept the license terms (click the checkbox).
Hit the blue Download button.
Once the download completes, unzip the downloaded file.
Import the driver
In DBeaver, go to the Driver Manager and import the Simba JDBC driver as follows:
Click the New button
In the Libraries tab, click the Add File button
Locate the directory where you unzipped the download and add the CassandraJDBC42.jar file.
Click the Find Class button which should identify the driver class as com.simba.cassandra.jdbc42.Driver.
In the Settings tab, set the following:
Driver Name: Cassandra
Driver Type: Generic
Class Name: com.simba.cassandra.jdbc42.Driver
URL Template: jdbc:cassandra://{host}[:{port}];AuthMech=1 (set authentication mechanism to 0 if your cluster doesn't have authentication enabled)
Default Port: 9042
Click the OK button to save the driver.
At this point, you should see Cassandra as one of the drivers in the list.
Connect to your cluster
In DBeaver, create a new database connection as follows:
Select Cassandra from the drivers list.
In the Main tab of the JDBC connection settings, set the following:
Host: node_ip_address (this could be any node in your cluster)
Port: 9042 (or whatever you've set as rpc_port in cassandra.yaml)
Username: your_db_username
Password: your_db_password
Click on the Test Connection button to confirm that the driver configuration is working.
Click on the Finish button to save the connection settings.
At this point, you should be able to browse the keyspaces and tables in your Cassandra cluster. Cheers!
👉 Please support the Apache Cassandra community by hovering over cassandra then click on the Watch tag button. 🙏 Thanks!
| Cassandra | 69,027,126 | 12 |
I have a Cassandra node running in a Docker container and I want to launch a CQL script when the database is ready. I tried checking the port to detect when it's ready :
while ! nc -z localhost 7199; do
sleep 1
done
echo "Cassandra is ready"
cqlsh -f ./createTables.cql
But the port is opened before the database is really ready, and the cqlsh therefore fails. How to properly check the Cassandra status and launch the script ?
| First, you need to wait on another port: 9042. This is the port that is used by CQLSH.
Another approach is to wait for a successful cqlsh execution instead of nc (or as a second step, since nc is much faster to execute). For example, you can use something like this:
while ! cqlsh -e 'describe cluster' ; do
sleep 1
done
to wait until Cassandra is ready...
| Cassandra | 48,034,869 | 12 |
Using Java, can I scan a Cassandra table and just update the TTL of a row? I don't want to change any data; I just want to scan a Cassandra table and set the TTL of a few rows.
Also, using Java, can I set a TTL that is absolute, for example (2016-11-22 00:00:00)? In other words, I don't want to specify the TTL in seconds, but as an absolute point in time.
| Cassandra doesn't allow setting a TTL for an entire row; it only allows setting TTLs on column values.
In case you're wondering why rows still seem to expire: if all the values of all the columns of a record are TTLed, then the row disappears when you try to SELECT it.
However, this is only true if you perform an INSERT with USING TTL. If you INSERT without a TTL and then do an UPDATE with a TTL, you'll still see the row, but with null values. Here are a few examples and some gotchas:
Example with a TTLed INSERT only:
CREATE TABLE test (
k text PRIMARY KEY,
v int,
);
INSERT INTO test (k,v) VALUES ('test', 1) USING TTL 10;
... 10 seconds after...
SELECT * FROM test ;
k | v
---------------+---------------
Example with a TTLed INSERT and a TTLed UPDATE:
INSERT INTO test (k,v) VALUES ('test', 1) USING TTL 10;
UPDATE test USING TTL 10 SET v=0 WHERE k='test';
... 10 seconds after...
SELECT * FROM test;
k | v
---------------+---------------
Example with a non-TTLed INSERT with a TTLed UPDATE
INSERT INTO test (k,v) VALUES ('test', 1);
UPDATE test USING TTL 10 SET v=0 WHERE k='test';
... 10 seconds after...
SELECT * FROM test;
k | v
---------------+---------------
test | null
Now you can see that the only way to solve your problem is to rewrite all the values of all the columns of your row with a new TTL.
In addition, there's no way to specify an explicit expiration date, but you can get a TTL value in seconds with simple math (as others suggested).
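For example, a hedged Java-driver sketch of that math, reusing the test table from the examples above (the keyspace/session setup and the exact expiration instant are assumptions):
import java.time.Duration;
import java.time.Instant;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
public class TtlFromAbsoluteDate {
    // Rewrites the column with a TTL derived from an absolute expiration time.
    static void expireAt(Session session, String key, int value, Instant expiry) {
        long seconds = Duration.between(Instant.now(), expiry).getSeconds();
        if (seconds <= 0) {
            throw new IllegalArgumentException("Expiration must be in the future");
        }
        // USING TTL takes whole seconds; the bind marker expects an int.
        PreparedStatement ps = session.prepare("UPDATE test USING TTL ? SET v = ? WHERE k = ?");
        session.execute(ps.bind((int) seconds, value, key));
    }
}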
Have a look at the official documentation about data expiration. And don't forget to have a look at the DELETE section for updating TTLs.
HTH.
| Cassandra | 40,730,510 | 12 |
I am trying to calculate the partition size for each row in a table with an arbitrary number of columns and types, using a formula from the DataStax Academy Data Modeling Course.
In order to do that I need to know the "size in bytes" for some common Cassandra data types. I tried to google this but I get a lot of suggestions so I am puzzled.
The data types I would like to know the byte size of are:
A single Cassandra TEXT character (I googled answers from 2 - 4 bytes)
A Cassandra DECIMAL
A Cassandra INT (I suppose it is 4 bytes)
A Cassandra BIGINT (I suppose it is 8 bytes)
A Cassandra BOOLEAN (I suppose it is 1 byte... or is it a single bit?)
Any other considerations would of course also be appreciated regarding data types sizes in Cassandra.
Adding more info since it seems confusing: I am only trying to estimate the "worst-case disk usage" the data would occupy, without any compression or other optimizations done by Cassandra behind the scenes.
I am following the DataStax Academy course DS220 (see link at the end) and am implementing its formula; I will use the info from the answers here as variables in that formula.
https://academy.datastax.com/courses/ds220-data-modeling/physical-partition-size
| I think, from a pragmatic point of view, that it is wise to get a back-of-the-envelope estimate of worst case using the formulae in the ds220 course up-front at design time. The effect of compression often varies depending on algorithms and patterns in the data. From ds220 and http://cassandra.apache.org/doc/latest/cql/types.html:
uuid: 16 bytes
timeuuid: 16 bytes
timestamp: 8 bytes
bigint: 8 bytes
counter: 8 bytes
double: 8 bytes
time: 8 bytes
inet: 4 bytes (IPv4) or 16 bytes (IPV6)
date: 4 bytes
float: 4 bytes
int 4 bytes
smallint: 2 bytes
tinyint: 1 byte
boolean: 1 byte (hopefully.. no source for this)
ascii: requires an estimate of average # chars * 1 byte/char
text/varchar: requires an estimate of average # chars * (avg. # bytes/char for language)
map/list/set/blob: an estimate
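As a very rough illustration only (this is a back-of-the-envelope upper bound, not the exact DS220 formula, and it ignores compression, clustering overhead and per-cell metadata; all the numbers are assumptions), the arithmetic can be as simple as:
public class PartitionSizeEstimate {
    public static void main(String[] args) {
        // Hypothetical row: one bigint, one int, one double and a text column
        // averaging 60 characters (~1 byte/char for ASCII-ish data).
        long bytesPerRow = 8 + 4 + 8 + 60;
        long rowsPerPartition = 500_000;          // assumed rows in one partition
        long estimate = bytesPerRow * rowsPerPartition;
        System.out.printf("~%.1f MB per partition (uncompressed, no overhead)%n",
                estimate / (1024.0 * 1024.0));
    }
}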
hope it helps
| Cassandra | 40,087,926 | 12 |
The Cassandra CQL shell window disappears right after installation on Windows.
This was installed using the MSI installer available on Planet Cassandra.
Why does this happen? Please help me.
Thanks in advance.
| I had the same issue with DataStax 3.9. This is how I sorted this:
Step 1: Open file: DataStax-DDC\apache-cassandra\conf\cassandra.yaml
Step 2: Uncomment the cdc_raw_directory and set new value to (for windows)
cdc_raw_directory: "C:/Program Files/DataStax-DDC/data/cdc_raw"
Step 3: Goto Windows Services and Start the DataStax DDC Server 3.9.0 Service
| Cassandra | 39,893,193 | 12 |
I know that Vnodes form many token ranges for each node by setting num_tokens in cassandra.yaml file.
Say, for example (a), I have 6 nodes, and on each node I have set num_tokens=256. How many virtual nodes are formed among these 6 nodes, i.e. how many virtual nodes or sub-token-ranges are contained in each physical node?
According to my understanding, when every node has num_tokens set to 256, it means that all 6 nodes contain 256 vnodes each. Is this statement true? If not, how do vnodes form the range of tokens (obviously random) in each node? It would be really convenient if someone could explain this to me with the example mentioned as (a).
What does the Ring of Vnodes signify in this URL? => http://docs.datastax.com/en/cassandra/3.x/cassandra/images/arc_vnodes_compare.png (taken from: http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2 )
| Every partition key in Cassandra is converted to a numerical token value using the Murmur3 hash function. The token range is -2^63 to +2^63-1, the same range as a signed Java long.
num_tokens defines how many token ranges are assigned to a node. Each node calculates 256 (num_tokens) random values in the token range and informs the other nodes what they are, so when a node needs to coordinate a request for a specific token it knows which nodes are responsible for it, according to the replication factor and DC/rack placement.
A better description for this feature would be "automatic token range assignment for better streaming capabilities", calling it "virtual" is a bit confusing.
In your case you have 6 nodes, each set with 256 token ranges, so you have 6*256 token ranges in total and each physical node contains 256 token ranges.
For example consider 2 nodes with num_tokens set to 4 and token range 0 to 100.
Node 1 calculates tokens 17, 35, 77, 92
Node 2 calculates tokens 4, 25, 68, 85
The ring shows the distribution of token ranges in this case.
Node 2 is responsible for token ranges 4-17, 25-35, 68-77, 85-92 and node 1 for the rest.
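If you want to see the same thing from code, the DataStax Java driver (2.1+) exposes the computed ranges through the cluster metadata; a minimal sketch (the contact point is a placeholder):
import java.util.Set;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.TokenRange;
public class TokenRangeDump {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Metadata metadata = cluster.getMetadata();
            Set<Host> hosts = metadata.getAllHosts();
            Set<TokenRange> ranges = metadata.getTokenRanges();
            // With 6 nodes and num_tokens=256 you would expect 6*256 ranges here.
            System.out.println(hosts.size() + " hosts, " + ranges.size() + " token ranges");
        } finally {
            cluster.close();
        }
    }
}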
| Cassandra | 37,940,630 | 12 |
I have a large table with several columns. One of the columns is created_at, which is a timestamp.
Now I want to add a new column called last_activated. This column should initially be filled with the value of created_at.
At first I just altered the table to add the new column -> No problem there
but then I tried
UPDATE my_table SET last_activated = created_at;
which caused the following problem:
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:54 no viable alternative at input ';' (update catalog_item set last_activated_at = [created_at];)">
Is this somehow possible to do?
I'm using cassandra 3.5.
| No, it's not possible. In the first place, a WHERE clause is mandatory with UPDATE.
| Cassandra | 37,795,916 | 12 |
Cassandra does not comply with ACID like RDBMS but CAP. So Cassandra picks AP out of CAP and leaves it to the user for tuning consistency.
I definitely cannot use Cassandra for core banking transaction because C* is slightly inconsistent.
But Cassandra writes are extremely fast which is good for OLTP.
I can use C* for OLAP because reads are extremely fast which is good for reporting too.
So I understood that C* is good only when your application does not need the data to be consistent for some amount of time, but reads and writes should be quick?
If my understanding is right, could you kindly list some example applications?
| ACID describes properties of relational databases, whereas BASE describes properties of most NoSQL databases, and Cassandra is one of them. The CAP theorem just explains the trade-off between consistency, availability and partition tolerance in distributed systems. The good thing about Cassandra is that it has tunable consistency, so you can be pretty much consistent (at the price of partition tolerance), which makes OLTP doable. As phact said, there are even some banks that built their transaction software on top of Cassandra. OLAP is also doable, but not with just Cassandra, since its partitioned row storage limits its capabilities; you need something like Spark to be able to run the complex queries required.
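For example, with the Java driver consistency can be tuned per statement (the table name is made up):
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
public class TunableConsistency {
    static void strongRead(Session session) {
        SimpleStatement stmt = new SimpleStatement("SELECT * FROM shop.orders WHERE id = 42");
        // QUORUM reads combined with QUORUM writes give strongly consistent
        // results, at the price of latency and availability during partitions.
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt);
    }
}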
| Cassandra | 37,434,016 | 12 |
How do I collect these metrics on a console (Spark shell or a spark-submit job) right after the task or job is done?
We are using Spark to load data from MySQL to Cassandra and it is quite huge (e.g. ~200 GB and 600M rows). When the task is done, we want to verify exactly how many rows Spark processed. We can get the number from the Spark UI, but how can we retrieve that number ("Output Records Written") from the Spark shell or from a spark-submit job?
Sample Command to load from Mysql to Cassandra.
val pt = sqlcontext.read.format("jdbc").option("url", "jdbc:mysql://...:3306/...").option("driver", "com.mysql.jdbc.Driver").option("dbtable", "payment_types").option("user", "hadoop").option("password", "...").load()
pt.save("org.apache.spark.sql.cassandra",SaveMode.Overwrite,options = Map( "table" -> "payment_types", "keyspace" -> "test"))
I want to retrieve all the Spark UI metrics on the above task mainly Output size and Records Written.
Please help.
Thanks for your time!
| Found the answer. You can get the stats by using SparkListener.
If your job has no input or output metrics you might get None.get exceptions, which you can safely ignore by guarding with an if statement.
sc.addSparkListener(new SparkListener() {
override def onTaskEnd(taskEnd: SparkListenerTaskEnd) {
val metrics = taskEnd.taskMetrics
if(metrics.inputMetrics != None){
inputRecords += metrics.inputMetrics.get.recordsRead}
if(metrics.outputMetrics != None){
outputWritten += metrics.outputMetrics.get.recordsWritten }
}
})
Please find the below example.
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import org.apache.spark.sql._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
val conf = new SparkConf()
.set("spark.cassandra.connection.host", "...")
.set("spark.driver.allowMultipleContexts","true")
.set("spark.master","spark://....:7077")
.set("spark.driver.memory","1g")
.set("spark.executor.memory","10g")
.set("spark.shuffle.spill","true")
.set("spark.shuffle.memoryFraction","0.2")
.setAppName("CassandraTest")
sc.stop
val sc = new SparkContext(conf)
val sqlcontext = new org.apache.spark.sql.SQLContext(sc)
var inputRecords = 0L  // needed by the listener below
var outputWritten = 0L
sc.addSparkListener(new SparkListener() {
override def onTaskEnd(taskEnd: SparkListenerTaskEnd) {
val metrics = taskEnd.taskMetrics
if(metrics.inputMetrics != None){
inputRecords += metrics.inputMetrics.get.recordsRead}
if(metrics.outputMetrics != None){
outputWritten += metrics.outputMetrics.get.recordsWritten }
}
})
val bp = sqlcontext.read.format("jdbc").option("url", "jdbc:mysql://...:3306/...").option("driver", "com.mysql.jdbc.Driver").option("dbtable", "bucks_payments").option("partitionColumn","id").option("lowerBound","1").option("upperBound","14596").option("numPartitions","10").option("fetchSize","100000").option("user", "hadoop").option("password", "...").load()
bp.save("org.apache.spark.sql.cassandra",SaveMode.Overwrite,options = Map( "table" -> "bucks_payments", "keyspace" -> "test"))
println("outputWritten",outputWritten)
Result:
scala> println("outputWritten",outputWritten)
(outputWritten,16383)
| Cassandra | 36,898,511 | 12 |
I can't run describe keyspaces for some reason, even though I'm clearly connecting to my Cassandra 3.3 host via the 3.1 python driver. Some other commands seem to work fine.
Thanks in advance!
from cassandra.cluster import Cluster
cluster = Cluster(['192.168.1.53'])
#session = cluster.connect('node_data')
session = cluster.connect()
session.execute('USE node_data')
rows = session.execute('SELECT * FROM users')
session.execute('DESCRIBE KEYSPACES;')
---------------------------------------------------------------------------
SyntaxException Traceback (most recent call last)
<ipython-input-5-8b1f82917aa9> in <module>()
----> 1 session.execute('DESCRIBE KEYSPACES;')
2
/Users/natemarks/.virtualenvs/cassandra/lib/python2.7/site-packages/cassandra/cluster.so in cassandra.cluster.Session.execute (cassandra/cluster.c:27107)()
/Users/natemarks/.virtualenvs/cassandra/lib/python2.7/site-packages/cassandra/cluster.so in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:60227)()
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:0 no viable alternative at input 'DESCRIBE' ([DESCRIBE]...)">
| DESCRIBE is a cqlsh-specific command, so it is not supported by the drivers since it is not considered a CQL command. You can find a full listing of the cqlsh commands here.
Alternatively you can get at a keyspace's schema using the python-driver by accessing Cluster.metadata and then accessing the keyspaces dict.
| Cassandra | 35,986,136 | 12 |
In the introductory DataStax Cassandra course they say that all of the clocks of the Cassandra cluster nodes have to be synchronized, in order to prevent READ queries from returning 'old' data.
If one or more nodes are down they cannot get updates, but as soon as they are back up again they will catch up, and there is no problem...
So why does a Cassandra cluster need synchronized clocks between the nodes?
| In general it is always a good idea to keep your server clocks in sync, but a primary reason why clock sync is needed between nodes is because Cassandra uses a concept called 'Last Write Wins' to resolve conflicts and determine which mutation represents the most correct up-to date state of data. This is explained in Why cassandra doesn't need vector clocks.
Whenever you 'mutate' (write or delete) column(s) in cassandra a timestamp is assigned by the coordinator handling your request. That timestamp is written with the column value in a cell.
When a read request occurs, cassandra builds your results finding the mutations for your query criteria and when it sees multiple cells representing the same column it will pick the one with the most recent timestamp (The read path is more involved than this but that is all you need to know in this context).
Things start to become problematic when your nodes' clocks become out of sync. As I mentioned, the coordinator node handling your request assigns the timestamp. If you do multiple mutations to the same column and different coordinators are assigned, you can create some situations where writes that happened in the past are returned instead of the most recent one.
Here is a basic scenario that describes that:
Assume we have a 2 node cluster with nodes A and B. Lets assume an initial state where A is at time t10 and B is at time t5.
User executes DELETE C FROM tbl WHERE key=5. Node A coordinates the request and it is assigned timestamp t10.
A second passes and a User executes UPDATE tbl SET C='data' where key=5. Node B coordinates the request and it is assigned timestamp t6.
User executes the query SELECT C from tbl where key=5. Because the DELETE from Step 1 has a more recent timestamp (t10 > t6), no results are returned.
Note that newer versions of the datastax drivers will start defaulting to use Client Timestamps to have your client application generate and assign timestamps to requests instead of relying on the C* nodes to assign them. datastax java-driver as of 3.0 now defaults to client timestamps (read more about there in 'Client-side generation'). This is very nice if all requests come from the same client, however if you have multiple applications writing to cassandra you now have to worry about keeping your client clocks in sync.
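For illustration, here is a hedged Java-driver sketch of pinning the write timestamp on the client with USING TIMESTAMP (microseconds since the epoch); the table and column names follow the scenario above:
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
public class ClientTimestampWrite {
    static void update(Session session, int key, String value) {
        long nowMicros = System.currentTimeMillis() * 1000L; // write timestamps are in microseconds
        PreparedStatement ps = session.prepare("UPDATE tbl USING TIMESTAMP ? SET c = ? WHERE key = ?");
        session.execute(ps.bind(nowMicros, value, key));
    }
}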
| Cassandra | 34,898,693 | 12 |
I need to add some new columns to my existing column_family/table in cassandra.
I can add a single column like this:
ALTER TABLE keyspace_name.table_name ADD column_name cql_type;
Can I add all new columns using a single query? If yes, how to do it using cql and datastax cassandra driver?
| This is fixed in Cassandra 3.6
https://issues.apache.org/jira/browse/CASSANDRA-10411
ALTER TABLE foo ADD (colname1 int, colname2 int)
| Cassandra | 34,607,181 | 12 |
I'm new to Cassandra and I've read that Cassandra encourages denormalization and duplication of data. This leaves me a little confused.
Let us imagine the following scenario:
I have a keyspace with four tables: A,B,C and D.
CREATE TABLE A (
tableID int,
column1 int,
column2 varchar,
column3 varchar,
column4 varchar,
column5 varchar,
PRIMARY KEY (column1, tableID)
);
Let us imagine that the other tables (B, C, D) have the same structure and the same data as table A, only with a different primary key, in order to respond to other queries.
If I update a row in table A, how can I ensure consistency of the data in the other tables that hold the same data?
| Cassandra provides BATCH for this purpose. From the documentation:
A BATCH statement combines multiple data modification language (DML) statements (INSERT, UPDATE, DELETE) into a single logical operation, and sets a client-supplied timestamp for all columns written by the statements in the batch. Batching multiple statements can save network exchanges between the client/server and server coordinator/replicas. However, because of the distributed nature of Cassandra, spread requests across nearby nodes as much as possible to optimize performance. Using batches to optimize performance is usually not successful, as described in Using and misusing batches section. For information about the fastest way to load data, see "Cassandra: Batch loading without the Batch keyword."
Batches are atomic by default. In the context of a Cassandra batch operation, atomic means that if any of the batch succeeds, all of it will. To achieve atomicity, Cassandra first writes the serialized batch to the batchlog system table that consumes the serialized batch as blob data. When the rows in the batch have been successfully written and persisted (or hinted) the batchlog data is removed. There is a performance penalty for atomicity. If you do not want to incur this penalty, prevent Cassandra from writing to the batchlog system by using the UNLOGGED option: BEGIN UNLOGGED BATCH
UNLOGGED BATCH is almost always undesirable and I believe is removed in future versions. Normal batches provide the functionality you desire.
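As an illustration only, a logged batch from the Java driver could keep two of the denormalized tables in sync like this (a sketch; the prepared INSERT statements and column lists are assumptions based on the question's schema):
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
public class DenormalizedWrite {
    static void writeToBothTables(Session session, int tableId, int column1, String column2) {
        PreparedStatement insertA = session.prepare("INSERT INTO a (tableid, column1, column2) VALUES (?, ?, ?)");
        PreparedStatement insertB = session.prepare("INSERT INTO b (tableid, column1, column2) VALUES (?, ?, ?)");
        // A LOGGED batch: either all statements are eventually applied, or none are.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
        batch.add(insertA.bind(tableId, column1, column2));
        batch.add(insertB.bind(tableId, column1, column2));
        session.execute(batch);
    }
}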
| Cassandra | 34,231,718 | 12 |
I am trying to set up a multinode cassandra database on two different machines.
How am I supposed to configure the cassandra.yaml file?
The datastax documentation says
listen_address¶
(Default: localhost ) The IP address or hostname that other Cassandra nodes use to connect to this node. If left unset, the hostname must resolve to the IP address of this node using /etc/hostname, /etc/hosts , or DNS. Do not specify 0.0.0.0.
When I use 'localhost' as the value of listen_address, it runs fine on the local machine, but when I use my IP address, it fails to connect. Why is that?
| Configuring the nodes and seed nodes is fairly simple in Cassandra but certain steps must be followed. The procedure for setting up a multi node cluster is well documented and I will quote from the linked document.
I think it is easier to illustrate the set up of nodes with 4 instead of 2 since 2 nodes would make little sense to a running Cassandra instance. If you had 4 nodes split between 2 machines and 1 seed node on each machine the conceptual configuration would appear as follows:
node1 86.82.155.1 (seed 1)
node2 86.82.155.2
node3 192.82.156.1 (seed 2)
node4 192.82.156.2
If each of these machines is the same in terms of layout you can use the same cassandra.yaml file across all nodes.
If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same copy of the cassandra.yaml file on all of them
You will need to set the IP address up under the -seeds configuration in cassandra.yaml.
-seeds: internal IP address of each seed node
parameters:
- seeds: "86.82.155.1,192.82.156.1"
Understanding the difference between a node and seed node is important. If you get these IP addresses crossed you may experience issues similar to what you are describing and from your comment it appears you have corrected the configuration.
Seed nodes do not bootstrap, which is the process of a new node joining an existing cluster. For new clusters, the bootstrap process on seed nodes is skipped.
If you are having trouble grasping the node-based architecture, read the Architecture in Brief document or watch the Understanding Core Concepts class.
| Cassandra | 32,689,794 | 12 |
I am trying to connect to cassandra, which is running on local desktop, via cassandra-driver for python using this simple code.
from cassandra.cluster import Cluster
cluster = Cluster()
session = cluster.connect()
and getting this error: NoHostAvailable: ('Unable to connect to any servers', {'127.0.0.1': InvalidRequest(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',)})
From the logs of Cassandra I can see that it does establish a connection, but it reports these errors:
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_columnfamilies, v=4
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_usertypes, v=4
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_columns, v=4
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_functions, v=4
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_aggregates, v=4
DEBUG 05:51:00 Responding: ERROR INVALID: unconfigured table schema_triggers, v=4
Any help to solve this problem with unconfigured tables will be appreciated.
| Are you possibly using the driver to connect to Cassandra 3.0.0-alpha1? If so, you'd need to be running the driver installed from this commit:
https://github.com/datastax/python-driver/tree/1a480f196ade42798596f5257d2cbeffcadf154f
Alternatively:
If you're just experimenting, the released drivers as of today work with all Cassandra versions 1.2 - 2.2.0
DataStax is readying a 3.0.0a1 version of the driver for use with Cassandra 3.0.0-alpha1, which will be available in pypi soon.
install the 3.0.0 alpha version of the driver as follows:
pip install --pre cassandra-driver
pip install --pre --upgrade cassandra-driver
| Cassandra | 31,824,537 | 12 |
I'm trying to start up a docker image that runs cassandra. I need to use thrift to communicate with cassandra, but it looks like that's disabled by default. Checking out the cassandra logs shows:
INFO 21:10:35 Not starting RPC server as requested.
Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
My question is: how can I enable thrift when starting this cassandra container?
I've tried to set various environment variables to no avail:
docker run --name cs1 -d -e "start_rpc=true" cassandra
docker run --name cs1 -d -e "CASSANDRA_START_RPC=true" cassandra
docker run --name cs1 -d -e "enablethrift=true" cassandra
| The sed workaround (and subsequent custom Dockerfiles that enable only this behavior) is no longer necessary.
Newer official Docker containers support a CASSANDRA_START_RPC environment variable using the -e flag. For example:
docker run --name cassandra1 -d -e CASSANDRA_START_RPC=true -p 9160:9160 -p 9042:9042 -p 7199:7199 -p 7001:7001 -p 7000:7000 cassandra
| Cassandra | 31,620,494 | 12 |
I converted an RDD[myClass] to a DataFrame and then registered it as an SQL table:
my_rdd.toDF().registerTempTable("my_rdd")
This table is callable and can be demonstrated with following command
%sql
SELECT * from my_rdd limit 5
But the next step gives error, saying Table Not Found: my_rdd
val my_df = sqlContext.sql("SELECT * from my_rdd limit 5")
I'm quite a newbie with Spark. I don't understand why this is happening. Can anyone help me out?
java.lang.RuntimeException: Table Not Found: my_rdd
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:111)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:111)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:111)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:175)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:187)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:182)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:186)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:207)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:236)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:192)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:207)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:236)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:192)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:177)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:182)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:172)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:1071)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:1071)
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:1069)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:915)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:68)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:73)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:75)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:77)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:79)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:81)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:83)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:85)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:87)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:89)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:91)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:93)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:95)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:97)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:99)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:101)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:103)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:105)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:107)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:109)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:111)
at $iwC$$iwC$$iwC.<init>(<console>:113)
at $iwC$$iwC.<init>(<console>:115)
at $iwC.<init>(<console>:117)
at <init>(<console>:119)
at .<init>(<console>:123)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:556)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:532)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:525)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:264)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
| Make sure to import the implicits._ from the same SQLContext. Temporary tables are kept in-memory in one specific SQLContext.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
my_rdd.toDF().registerTempTable("my_rdd")
val my_df = sqlContext.sql("SELECT * from my_rdd LIMIT 5")
my_df.collect().foreach(println)
| Cassandra | 30,263,646 | 12 |
I am trying the Cassandra Node.js driver and I'm stuck on a problem while inserting a record; it looks like the driver is not able to insert float values.
Problem: when passing an int value for insertion into the db, the API gives the following error:
Debug: hapi, internal, implementation, error
ResponseError: Expected 4 or 0 byte int (8)
at FrameReader.readError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/readers.js:291:13)
at Parser.parseError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:185:45)
at Parser.parseBody (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:167:19)
at Parser._transform (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:101:10)
at Parser.Transform._read (_stream_transform.js:179:10)
at Parser.Transform._write (_stream_transform.js:167:12)
at doWrite (_stream_writable.js:225:10)
at writeOrBuffer (_stream_writable.js:215:5)
at Parser.Writable.write (_stream_writable.js:182:11)
at write (_stream_readable.js:601:24)
I am trying to execute following query from code:
INSERT INTO ragchews.user
(uid ,iid ,jid ,jpass ,rateCount ,numOfratedUser ,hndl ,interests ,locX ,locY ,city )
VALUES
('uid_1',{'iid1'},'jid_1','pass_1',25, 10, {'NEX1231'}, {'MUSIC'}, 21.321, 43.235, 'delhi');
parameter passed to execute() is
var params = [uid, iid, jid, jpass, rateCount, numOfratedUser, hndl, interest, locx, locy, city];
where
var locx = 32.09;
var locy = 54.90;
and call to execute looks like:
var addUserQuery = 'INSERT INTO ragchews.user (uid ,iid ,jid ,jpass ,rateCount ,numOfratedUser ,hndl ,interests ,locX ,locY ,city) VALUES (?,?,?,?,?,?,?,?,?,?,?);';
var addUser = function(user, cb){
console.log(user);
client.execute(addUserQuery, user, function(err, result){
if(err){
throw err;
}
cb(result);
});
};
CREATE TABLE ragchews.user(
uid varchar,
iid set<varchar>,
jid varchar,
jpass varchar,
rateCount int,
numOfratedUser int,
hndl set<varchar>,
interests set<varchar>,
locX float,
locY float,
city varchar,
favorite map<varchar, varchar>,
PRIMARY KEY(uid)
);
P.S
Some observations while trying to understand the issue:
Since it seemed the problem was with float, I changed the type of locX and locY from float to int and re-ran the code. The same error persists; hence, the problem is not associated specifically with the float CQL type.
Next, I attempted to remove all ints from the INSERT query and tried to insert only non-numeric values. This attempt successfully inserted the values into the db. Hence it now looks like this problem may be associated with numeric types.
| The following is taken verbatim from the Cassandra Node.js driver data type documentation:
When encoding data, on a normal execute with parameters, the driver tries to guess the target type based on the input type. Values of type Number will be encoded as double (as Number is double / IEEE 754 value).
Consider the following example:
var key = 1000;
client.execute('SELECT * FROM table1 where key = ?', [key], callback);
If the key column is of type int, the execution fails. There are two possible ways to avoid this type of problem:
Prepare the data (recommended) - prepare the query before execution
client.execute('SELECT * FROM table1 where key = ?', [key], { prepare : true }, callback);
Hinting the target types - Hint: the first parameter is an integer
client.execute('SELECT * FROM table1 where key = ?', [key], { hints : ['int'] }, callback);
If you are dealing with batch updates then this issue may be of interest to you.
| Cassandra | 26,682,873 | 12 |
I read about Cassandra 2's lightweight transactions. Is the consistency level of such a write always QUORUM? Would this mean that even if I have a multi-data-center setup with hundreds of nodes, a quorum of the entire cluster (a majority of the row's replicas across all data centers) is involved? Won't this be really slow, and won't it affect availability?
Can we do LOCAL_QUORUM or EACH_QUORUM consistency? This would be preferred if writers for data replicated across multiple data centers would always originate from a specific data center only.
| The suggested consistency level for lightweight transactions is SERIAL. Behind the scenes, however, SERIAL is even worse than QUORUM, because it is a multi-phase QUORUM. As you said, the situation can get hard to handle when you have multiple DCs -- DataStax estimates "effectively a degradation to one-third of normal".
There is a LOCAL_SERIAL that could be perfect for your situation where all DCs receive data from a specific DC.
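For example, with the Java driver the serial consistency of a lightweight transaction can be set per statement (a sketch; the users table is hypothetical):
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
public class LocalSerialLwt {
    static boolean reserveUsername(Session session, String name) {
        SimpleStatement stmt = new SimpleStatement("INSERT INTO users (username) VALUES (?) IF NOT EXISTS", name);
        stmt.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL); // Paxos phase stays in the local DC
        stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);       // for the commit/read phase
        ResultSet rs = session.execute(stmt);
        return rs.wasApplied(); // true only if the row did not exist before
    }
}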
Here you can find more info:
LIGHTWEIGHT TRANSACTIONS
LINEARIZABLE CONSISTENCY
HTH,Carlo
| Cassandra | 24,986,578 | 12 |
Does anyone know how to generate TimeBased UUIDs in Java/Scala?
Here is the column family:
CREATE table col(ts timeuuid)
I'm using Cassandra 1.2.4
Appreciate your help!
| If you are using the Datastax drivers you can use the utility class, UUIDs, to generate one
import com.datastax.driver.core.utils.UUIDs;
....
UUID timeBasedUuid = UUIDs.timeBased();
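A slightly longer hedged sketch, binding the generated value into the question's table and reading the embedded creation time back (it assumes an open Session on the right keyspace):
import java.util.UUID;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.UUIDs;
public class TimeUuidExample {
    static void insert(Session session) {
        UUID ts = UUIDs.timeBased();
        session.execute(session.prepare("INSERT INTO col (ts) VALUES (?)").bind(ts));
        long millis = UUIDs.unixTimestamp(ts); // creation time, ms since the epoch
        System.out.println("created at " + millis);
    }
}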
| Cassandra | 24,952,066 | 12 |
I want to build a RESTful API with Java and Cassandra 2.x (on the Jersey framework). I'm new to both technologies, so I would like to ask whether this is the correct way to integrate and share the Cassandra driver.
0. Get the driver through Maven
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>2.0.3</version>
</dependency>
1. Wrap driver's functionality with a Client class:
package com.example.cassandra;
import com.datastax.driver.core.*;
public class Client {
private Cluster cluster;
private Session session;
public Client(String node) {
connect( node );
}
private void connect(String node) {
cluster = Cluster.builder()
.addContactPoint(node)
.build();
session = cluster.connect();
}
public ResultSet execute( String cql3 ) {
return session.execute( cql3 );
}
public void close() {
cluster.close();
}
}
2. I instantiate the client in a ContextListener and share it through a context attribute
package com.example.listener;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.example.cassandra.Client;
public class ExampleContextListener implements ServletContextListener {
Client cassandraClient;
public void contextInitialized(ServletContextEvent servletContextEvent) {
ServletContext ctx = servletContextEvent.getServletContext();
cassandraClient = new Client( ctx.getInitParameter( "DBHost" ) );
ctx.setAttribute( "DB", cassandraClient );
}
public void contextDestroyed(ServletContextEvent servletContextEvent) {
cassandraClient.close();
}
}
3. Now I get the client from servlet's context and use it
Client client = (Client) context.getAttribute("DB");
client.execute("USE testspace;");
ResultSet rs = client.execute("SELECT * from users;");
for (Row row : rs ) {
output += row.getString("lname") + "|";
}
Is that the correct way to do it (both from performance and architectural point of view)?
Full example available on: https://github.com/lukaszkujawa/jersey-cassandra
| I just developed what you are going to develop. What you wrote works, but it's not my favourite approach. I'd rather create a singleton (since one session is enough for an application). Following Joshua Bloch's enum singleton pattern, here is what I did:
public enum Cassandra {
DB;
private Session session;
private Cluster cluster;
private static final Logger LOGGER = LoggerFactory.getLogger(Cassandra.class);
/**
* Connect to the cassandra database based on the connection configuration provided.
* Multiple call to this method will have no effects if a connection is already established
* @param conf the configuration for the connection
*/
public void connect(ConnectionCfg conf) {
if (cluster == null && session == null) {
cluster = Cluster.builder().withPort(conf.getPort()).withCredentials(conf.getUsername(), conf.getPassword()).addContactPoints(conf.getSeeds()).build();
session = cluster.connect(conf.getKeyspace());
}
Metadata metadata = cluster.getMetadata();
LOGGER.info("Connected to cluster: " + metadata.getClusterName() + " with partitioner: " + metadata.getPartitioner());
metadata.getAllHosts().stream().forEach((host) -> {
LOGGER.info("Cassandra datacenter: " + host.getDatacenter() + " | address: " + host.getAddress() + " | rack: " + host.getRack());
});
}
/**
* Invalidate and close the session and connection to the cassandra database
*/
public void shutdown() {
LOGGER.info("Shutting down the whole cassandra cluster");
if (null != session) {
session.close();
}
if (null != cluster) {
cluster.close();
}
}
public Session getSession() {
if (session == null) {
throw new IllegalStateException("No connection initialized");
}
return session;
}
}
And in the context listener I call connect or shutdown.
Since all exceptions in the new driver are unchecked, my tip for you is to create your own implementation of the Jersey ExceptionMapper mapping DriverException. One more thing: think about working with PreparedStatements rather than Strings, so that Cassandra parses each query only once. In my application I followed the above pattern for the queries as well (an enum singleton that prepares the statements when first loaded and then exposes methods to use them).
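A minimal sketch of that idea (the users table and query are invented; it builds on the Cassandra enum above and assumes connect() has already been called):
import java.util.UUID;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
public enum Queries {
    INSTANCE;
    // Prepared once, when the enum is first loaded, so Cassandra parses the CQL a single time.
    private final PreparedStatement findUser =
            Cassandra.DB.getSession().prepare("SELECT * FROM users WHERE id = ?");
    public ResultSet findUser(UUID id) {
        return Cassandra.DB.getSession().execute(findUser.bind(id));
    }
}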
HTH,
Carlo
| Cassandra | 24,687,991 | 12 |
I have a Cassandra table containing 3 million rows. Now I am trying to fetch all the rows and write them to several CSV files. I know it is impossible to perform select * from mytable. Could someone please tell me how I can do this?
Or is there any way to read the rows n at a time, without specifying any WHERE conditions?
| As far as I know, one improvement in Cassandra 2.0 'on the driver side' is automatic paging. You can do something like this:
Statement stmt = new SimpleStatement("SELECT * FROM images LIMIT 3000000");
stmt.setFetchSize(100);
ResultSet rs = session.execute(stmt);
// Iterate over the ResultSet here
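And since the goal is CSV output, draining the (automatically paged) ResultSet could look roughly like this (a sketch; the column names are invented):
import java.io.PrintWriter;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
public class ExportToCsv {
    static void export(ResultSet rs, PrintWriter out) {
        for (Row row : rs) {              // the driver fetches the next page transparently
            out.println(row.getString("id") + "," + row.getString("name"));
        }
        out.flush();
    }
}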
For more, read Improvements on the driver side with Cassandra 2.0.
You can find the driver here.
| Cassandra | 23,745,322 | 12 |
Question to all Cassandra experts out there.
I have a column family with about a million records.
I would like to query these records in such a way that I should be able to perform a Not-Equal-To kind of operation.
I Googled on this and it seems I have to use some sort of Map-Reduce.
Can somebody tell me what are the options available in this regard.
| I can suggest a few approaches.
1) If you have a limited number of values that you would like to test for not-equality, consider modeling those as boolean columns (e.g. a column isEqualToUnitedStates with true or false values).
2) Otherwise, consider emulating the unsupported query != X by combining results of two separate queries, < X and > X on the client-side.
3) If your schema cannot support either type of query above, you may have to resort to writing custom routines that will do client-side filtering and construct the not-equal set dynamically. This will work if you can first narrow down your search space to manageable proportions, such that it's relatively cheap to run the query without the not-equal.
So let's say you're interested in all purchases of a particular customer of every product type except Widget. An ideal query could look something like SELECT * FROM purchases WHERE customer = 'Bob' AND item != 'Widget'; Now of course, you cannot run this, but in this case you should be able to run SELECT * FROM purchases WHERE customer = 'Bob' without wasting too many resources and filter item != 'Widget' in the client application.
4) Finally, if there is no way to restrict the data in a meaningful way before doing the scan (querying without the equality check would return too many rows to handle comfortably), you may have to resort to MapReduce. This means running a distributed job that scans all rows in the table across the cluster. Such jobs will obviously run a lot slower than native queries, and are quite complex to set up. If you want to go this way, please look into Cassandra Hadoop integration.
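A small Java sketch of option 3, narrowing on the partition server-side and applying the inequality on the client (it reuses the purchases example from above; driver setup is omitted):
import java.util.ArrayList;
import java.util.List;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
public class NotEqualFilter {
    static List<Row> purchasesExcluding(Session session, String customer, String excludedItem) {
        List<Row> result = new ArrayList<>();
        for (Row row : session.execute(
                session.prepare("SELECT * FROM purchases WHERE customer = ?").bind(customer))) {
            if (!excludedItem.equals(row.getString("item"))) { // client-side != filter
                result.add(row);
            }
        }
        return result;
    }
}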
| Cassandra | 21,925,525 | 12 |