question | answer | tag | question_id | score
---|---|---|---|---
In BigTable/GFS and Cassandra terminology, what is the definition of a SSTable?
| A Sorted Strings Table (a term borrowed from Google) is a file of key/value string pairs, sorted by keys
| Cassandra | 2,576,012 | 157 |
I am learning NoSQL and looking at different options for one of my client's requirements. I have gone through various resources before asking this question (as a person with little knowledge of NoSQL).
I need to store and read data at a fast rate.
Fully fail-safe and easily scalable.
Able to search through data for Analytics.
I ended up with a short list of: Cassandra and Elasticsearch
What I do understand is that Cassandra is a perfect NoSQL storage solution for me, as I can write and read data using indexes. Where it fails, or could fail, is analytics. In the future, if I want to get data from from_date to to_date, or add more ways to query data for analytics, I may run into trouble if I don't design the data model properly or with a long-term view, which can be quite hard in an ever-changing world.
Elasticsearch, on the other hand, is best at indexing (backed by Lucene) and can search the data by throwing arbitrary text at it. But does it work the same way if I want to retrieve data from from_date to to_date (I expect it might)? The real question is: is it a search engine, or a proper NoSQL data store like Cassandra? If it is, why do we still need Cassandra?
If these two belong to different worlds, please explain that! How do we combine them to get a more effective solution?
| One of our applications uses data that is stored into both Cassandra and ElasticSearch. We use Cassandra to access those records whenever we can, and have data duplicated into query tables designed to adhere to specific application-side requests. For a more liberal search than our query tables can allow, ElasticSearch performs that functionality nicely.
We have asked that same question (of ourselves)..."Why don't we just get everything from ElasticSearch?"
The answer is that ElasticSearch was designed to be a search engine, and not a persistent data store. Sometimes ElasticSearch loses writes. Schema changes are difficult to do in ElasticSearch without blowing everything away and reloading. For that purpose, I have written jobs that are designed to keep ElasticSearch in-sync with our Cassandra cluster. There was also a fairly recent discussion on Quora about this topic, that yielded similar points.
That being said, ElasticSearch works great as a search engine. And Cassandra works great as a scalable, high-performance datastore. But querying data is different from searching for data. There are times that we need one or the other, and a combination of the two works well for our application. It may (or it may not) work well for yours.
As for analytics, I have had some success in using the Cassandra Spark connector, to serve more complex OLAP queries.
Edit 20200421
I've written a newer answer to a similar question:
ElasticSearch vs. ElasticSearch+Cassandra
| Cassandra | 27,054,954 | 139 |
I know there are three different, popular types of NoSQL databases.
Key/Value: Redis, Tokyo Cabinet, Memcached
ColumnFamily: Cassandra, HBase
Document: MongoDB, CouchDB
I have read long blog posts about them without understanding much.
I know relational databases and have gotten the hang of document-based databases like MongoDB/CouchDB.
Could someone tell me what the major differences are between these and the first two types on the list?
| The main differences are the data model and the querying capabilities.
Key-value stores
The first type is very simple and probably doesn't need any further explanation.
Data model: more than key-value stores
Although there is some debate on the correct name for databases such as Cassandra, I'd like to call them column-family stores. Although key-value pairs are an essential part of Cassandra, it's not limited to just that. It allows you to nest key-value pairs, so a key could refer to multiple sub-key-value pairs.
You cannot nest key-value pairs indefinitely though. You are limited to three levels (column families) or four levels of nesting (super-column families). In case the term column family doesn't ring a bell, see the "WTF is a SuperColumn" article; it's a good explanation of Cassandra's data model.
Document databases, such as CouchDB and MongoDB store entire documents in the form of JSON objects. You can think of these objects as nested key-value pairs. Unlike Cassandra, you can nest key-value pairs as much as you want. JSON also supports arrays and understands different data types, such as strings, numbers and boolean values.
Querying
I believe column-family stores can only be queried by key, or by writing map-reduce functions. You cannot query the values like you would in an SQL database. If your application needs more complex queries, it will have to create and maintain indexes in order to access the desired data.
Document databases support queries by key and map-reduce functions as well, but also allow you to do basic queries by value, such as "Give me all users with more than 10 posts". Document databases are more flexible in this way.
| Cassandra | 3,554,169 | 121 |
The three types of NoSQL databases I've read about is key-value, column-oriented, and document-oriented.
Key-value is pretty straightforward - a key with a plain value.
I've seen document-oriented databases described as like key-value, but the value can be a structure, like a JSON object. Each "document" can have all, some, or none of the same keys as another.
Column oriented seems to be very much like document oriented in that you don't specify a structure.
So what is the difference between these two, and why would you use one over the other?
I've specifically looked at MongoDB and Cassandra. I basically need a dynamic structure that can change, but not affect other values. At the same time I need to be able to search/filter specific keys and run reports. With CAP, AP is the most important to me. The data can "eventually" be synced across nodes, just as long as there is no conflict or loss of data. Each user would get their own "table".
| The main difference is that document stores (e.g. MongoDB and CouchDB) allow arbitrarily complex documents, i.e. subdocuments within subdocuments, lists with documents, etc. whereas column stores (e.g. Cassandra and HBase) only allow a fixed format, e.g. strict one-level or two-level dictionaries.
| Cassandra | 7,565,012 | 114 |
When experimenting with Cassandra I've observed that Cassandra listens to the following ports:
TCP *:8080
TCP *:8888
TCP *:57311
TCP *:57312
TCP 127.0.0.1:7000
TCP 127.0.0.1:9160
UDP 127.0.0.1:700
How does Cassandra use each of the ports listed?
| @Schildmeijer is largely right; however, port 7001 is still used when using TLS-encrypted internode communication.
So my complete list would be for current versions of Cassandra:
7199 - JMX (was 8080 pre Cassandra 0.8.xx)
7000 - Internode communication (not used if TLS enabled)
7001 - TLS Internode communication (used if TLS enabled)
9160 - Thrift client API
9042 - CQL native transport port
| Cassandra | 2,359,159 | 110 |
Is there a command to delete all the rows present in a CQL table in Cassandra, like the one in SQL?
delete from TABLE
Going by the documentation, I don't find any way to perform a delete operation without a WHERE condition.
DELETE col1 FROM SomeTable WHERE userID = 'some_key_value';
| To remove all rows from a CQL Table, you can use the TRUNCATE command:
TRUNCATE keyspace_name.table_name;
Or if you are already using the keyspace that contains your target table:
TRUNCATE table_name;
Important to note: by default, Cassandra creates a snapshot of the table just prior to TRUNCATE. Be sure to clean up old snapshots, or set auto_snapshot: false in your cassandra.yaml.
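If those snapshots accumulate, they can be removed with nodetool; a sketch, where keyspace_name is a placeholder and the exact flags vary slightly by Cassandra version:
nodetool clearsnapshot keyspace_name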
| Cassandra | 25,483,308 | 108 |
I have nearly settled on Cassandra after my research on large-scale data storage solutions. But it is generally said that HBase is a better solution for large-scale data processing and analysis.
While both are key/value stores and both run (or, in Cassandra's case, recently can run) on a Hadoop layer, what makes HBase a better candidate when processing/analysis is required on large data?
I also found good details about both at
http://ria101.wordpress.com/2010/02/24/hbase-vs-cassandra-why-we-moved/
but I'm still looking for concrete advantages of HBase.
I am more convinced about Cassandra because of its simplicity in adding nodes, its seamless replication, and its no-single-point-of-failure features. It also has the secondary index feature, so that's a good plus.
| As a Cassandra developer, I'm better at answering the other side of the question:
Cassandra scales better. Cassandra is known to scale to over 400 nodes in a cluster; when Facebook deployed Messaging on top of HBase they had to shard it across 100-node HBase sub-clusters.
Cassandra supports hundreds, even thousands of ColumnFamilies. "HBase currently does not do well with anything above two or three column families."
As a fully distributed system with no "special" nodes or processes, Cassandra is simpler to set up and operate, easier to troubleshoot, and more robust.
Cassandra's support for multi-master replication means that not only do you get the obvious power of multiple datacenters -- geographic redundancy, local latencies -- but you can also split realtime and analytical workloads into separate groups, with realtime, bidirectional replication between them. If you don't split those workloads apart they will contend spectacularly.
Because each Cassandra node manages its own local storage, Cassandra has a substantial performance advantage that is unlikely to be narrowed significantly. (E.g., it's standard practice to put the Cassandra commitlog on a separate device so it can do its sequential writes unimpeded by random i/o from read requests.)
Cassandra allows you to choose how strong you want it to require consistency to be on a per-operation basis. Sometimes this is misunderstood as "Cassandra does not give you strong consistency," but that is incorrect.
Cassandra offers RandomPartitioner as well as the more Bigtable-like OrderedPartitioner. RandomPartitioner is much less prone to hot spots.
Cassandra offers on- or off-heap caching with performance comparable to memcached, but without the cache consistency problems or complexity of requiring extra moving parts
Non-Java clients are not second-class citizens
To my knowledge, the main advantage HBase has right now (HBase 0.90.4 and Cassandra 0.8.4) is that Cassandra does not yet support transparent data compression. (This has been added for Cassandra 1.0, due in early October, but today that is a real advantage for HBase.) HBase may also be better optimized for the kinds of range scans done by Hadoop batch processing.
There are also some things that are not necessarily better, or worse, just different. HBase adheres more strictly to the Bigtable data model, where each column is versioned implicitly. Cassandra drops versioning, and adds SuperColumns instead.
| Cassandra | 7,237,271 | 87 |
There are many tables in the Cassandra database which contain a column titled user_id. The user_id values refer to users stored in the users table. As some users are deleted, I would like to delete orphan records in all tables that contain a column titled user_id.
Is there a way to list all tables using CassandraSQLContext or any other built-in method or custom procedure in order to avoid explicitly defining the list of tables?
| From cqlsh execute describe tables;
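If you need the list programmatically rather than from the shell, Cassandra 3.x and later also expose schema metadata in the system_schema keyspace; a sketch (substitute your own keyspace name):
SELECT table_name FROM system_schema.tables WHERE keyspace_name = 'your_keyspace';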
| Cassandra | 38,696,316 | 86 |
Reading several papers and documents on the internet, I found a lot of contradictory information about the Cassandra data model. Many identify it as a column-oriented database, others as row-oriented, and still others define it as a hybrid of both.
According to what I know about how Cassandra stores files, it uses the *-Index.db file to find the right position in the *-Data.db file, where the bloom filter, column index and then the columns of the required row are stored.
In my opinion, this is strictly row-oriented. Is there something I'm missing?
|
If you take a look at the Readme file at Apache Cassandra git repo, it says that,
Cassandra is a partitioned row store. Rows are organized into tables
with a required primary key.
Partitioning means that Cassandra can distribute your data across
multiple machines in an application-transparent manner. Cassandra will
automatically repartition as machines are added and removed from the
cluster.
Row store means that like relational databases, Cassandra organizes
data by rows and columns.
Column oriented or columnar databases are stored on disk column wise.
e.g., a Bonuses table:
ID Last First Bonus
1 Doe John 8000
2 Smith Jane 4000
3 Beck Sam 1000
In a row-oriented database management system, the data would be stored like this: 1,Doe,John,8000;2,Smith,Jane,4000;3,Beck,Sam,1000;
In a column-oriented database management system, the data would be stored like this:
1,2,3;Doe,Smith,Beck;John,Jane,Sam;8000,4000,1000;
Cassandra is basically a column-family store
Cassandra would store the above data as,
"Bonuses" : {
row1 : { "ID":1, "Last":"Doe", "First":"John", "Bonus":8000},
row2 : { "ID":2, "Last":"Smith", "First":"Jane", "Bonus":4000}
...
}
Also, the number of columns in each row doesn't have to be the same. One row can have 100 columns and the next row can have only 1 column.
Read this for more details.
| Cassandra | 13,010,225 | 84 |
I need to run a shell script file using Node.js that executes a set of Cassandra DB commands. Can anybody please help me with this?
Inside the db.sh file:
create keyspace dummy with replication = {'class':'SimpleStrategy','replication_factor':3};
create table dummy (userhandle text, email text primary key, name text, profilepic text);
| You could use the child_process module of Node.js to execute any shell commands or scripts within Node.js. Let me show you with an example: I am running a shell script (hi.sh) from Node.js.
hi.sh
echo "Hi There!"
node_program.js
const { exec } = require('child_process');

var yourscript = exec('sh hi.sh',
    (error, stdout, stderr) => {
        console.log(stdout);
        console.log(stderr);
        if (error !== null) {
            console.log(`exec error: ${error}`);
        }
    });
Here, when I run the Node.js file, it will execute the shell file, and the output would be:
Run
node node_program.js
output
Hi There!
You can execute any script just by passing the shell command or shell script to exec.
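Applied to the original question, a minimal sketch might look like the following, assuming cqlsh is on the PATH and the CQL statements are saved in a db.cql file (cqlsh's -f flag runs statements from a file; host and credential flags may need to be added):

const { exec } = require('child_process');

// run the CQL file through cqlsh instead of a plain shell script
var loadSchema = exec('cqlsh -f db.cql',
    (error, stdout, stderr) => {
        console.log(stdout);
        console.log(stderr);
        if (error !== null) {
            console.log(`exec error: ${error}`);
        }
    });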
| Cassandra | 44,647,778 | 83 |
I'm looking at both projects and I can't really see the difference
from Cassandra Site:
Cassandra is a highly scalable, eventually consistent, distributed, structured key-value store...Cassandra is eventually consistent. Like BigTable, Cassandra provides a ColumnFamily-based data model richer than typical key/value systems.
from CouchDB Site:
Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API.
That said, I see the specific differences between each project (access methods, implementation languages, etc.), but to give AN EXAMPLE: when you talk about Solr or Sphinx, you know both are indexers with big differences, but at the end of the day they are indexers.
Can I say here that Cassandra and CouchDB are non-relational databases that in some cases one can replace the other?
| CouchDB is a document store. You put documents (JSON objects) in it and define views (indexes) over them. The objects can be arbitrarily complex with potentially deep structure. Further, they are not constrained to following some consistent schema.
Cassandra is a ragged-table key-value store. It just stores rows, each of which has a set of named columns grouped in to families with values. It sounds quite close to BigTable; BigTable doesn't require each row to have the same structure (unlike an SQL database). The values may have some structure, but this kind of store doesn't know anything about that -- they're just strings/byte sequences.
Yes, they are both non-relational databases, and there is probably a fair amount of overlap in their applicability, but they do have distinctly different data organization models. Each can probably be forced into emulating the other, but each model will map best to a different set of problems.
| Cassandra | 1,104,624 | 81 |
Merkle Trees are used as an anti-entropy mechanism in several distributed, replicated key/value stores:
Dynamo
Riak
Cassandra
No doubt an anti-entropy mechanism is A Good Thing - transient failures just happen, in production.
I'm just not sure I understand why Merkle Trees are the popular approach.
Sending a complete Merkle tree to a peer involves sending the local key-space to that peer, along with
hashes of each key value, stored in the lowest levels of the tree.
Diffing a Merkle tree sent from a peer requires having a Merkle tree of your own.
Since both peers must already have a sorted key / value-hash space on hand, why not do a linear merge to detect discrepancies?
I'm just not convinced that the tree structure provides any kind of savings when you factor in upkeep costs, and the fact
that linear passes over the tree leaves are already being done just to serialize the representation over the wire.
To ground this out, a straw-man alternative might be to have nodes exchange arrays of hash digests,
which are incrementally updated and bucketed by modulo ring-position.
What am I missing?
| Merkle trees limit the amount of data transferred when synchronizing. The general assumptions are:
Network I/O is more expensive than local I/O + computing the hashes.
Transferring the entire sorted key space is more expensive than progressively limiting the comparison over several steps.
The key spaces have fewer discrepancies than similarities.
A Merkle Tree exchange would look like this:
1. Start with the root of the tree (a list of one hash value).
2. The origin sends the list of hashes at the current level.
3. The destination diffs the list of hashes against its own and then requests subtrees that are different. If there are no differences, the request can terminate.
4. Repeat steps 2 and 3 until leaf nodes are reached.
5. The origin sends the values of the keys in the resulting set.
In the typical case, the complexity of synchronizing the key spaces will be log(N). Yes, at the extreme, where there are no keys in common, the operation will be equivalent to sending the entire sorted list of hashes, O(N). One could amortize the expense of building Merkle trees by building them dynamically as writes come in and keeping the serialized form on disk.
I can't speak to how Dynamo or Cassandra use Merkle trees, but Riak stopped using them for intra-cluster synchronization (hinted handoff and read-repair are sufficient in most cases). We have plans to add them back later after some internal architectural bits have changed.
For more information about Riak, we encourage you to join the mailing list: http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
| Cassandra | 5,486,304 | 81 |
Does anyone know the difference between the two CQL data types text and varchar in Cassandra? The Cassandra documentation describes both types as "UTF-8 encoded string" and nothing more.
| text is just an alias for varchar!
The documentation:
Using cql3 - Datastax
CQL commands - Datastax
Documentation on Datatypes in CQL - Datastax
CQL3 Documentation - Apache
EDIT
Here's the link to the C* 1.2 docs. The text vs varchar info is still the same, however this document contains some extra datatypes.
EDIT v2
Documentation links have been updated to the docs for C* 3. I couldn't find a good alternative for the C* 1.2 docs.
| Cassandra | 17,530,230 | 74 |
I have a Cassandra table that for simplicity looks something like:
key: text
jsonData: text
blobData: blob
I can create a basic data frame for this using spark and the spark-cassandra-connector using:
val df = sqlContext.read
.format("org.apache.spark.sql.cassandra")
.options(Map("table" -> "mytable", "keyspace" -> "ks1"))
.load()
I'm struggling though to expand the JSON data into its underlying structure. I ultimately want to be able to filter based on the attributes within the json string and return the blob data. Something like jsonData.foo = "bar" and return blobData. Is this currently possible?
| Spark >= 2.4
If needed, schema can be determined using schema_of_json function (please note that this assumes that an arbitrary row is a valid representative of the schema).
import org.apache.spark.sql.functions.{lit, schema_of_json, from_json}
import collection.JavaConverters._
val schema = schema_of_json(lit(df.select($"jsonData").as[String].first))
df.withColumn("jsonData", from_json($"jsonData", schema, Map[String, String]().asJava))
Spark >= 2.1
You can use from_json function:
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._
val schema = StructType(Seq(
StructField("k", StringType, true), StructField("v", DoubleType, true)
))
df.withColumn("jsonData", from_json($"jsonData", schema))
Spark >= 1.6
You can use get_json_object which takes a column and a path:
import org.apache.spark.sql.functions.get_json_object
val exprs = Seq("k", "v").map(
c => get_json_object($"jsonData", s"$$.$c").alias(c))
df.select($"*" +: exprs: _*)
and extracts fields to individual strings which can be further casted to expected types.
The path argument is expressed using dot syntax, with leading $. denoting document root (since the code above uses string interpolation $ has to be escaped, hence $$.).
Spark <= 1.5:
Is this currently possible?
As far as I know it is not directly possible. You can try something similar to this:
val df = sc.parallelize(Seq(
("1", """{"k": "foo", "v": 1.0}""", "some_other_field_1"),
("2", """{"k": "bar", "v": 3.0}""", "some_other_field_2")
)).toDF("key", "jsonData", "blobData")
I assume that the blob field cannot be represented in JSON. Otherwise you can omit splitting and joining:
import org.apache.spark.sql.Row
val blobs = df.drop("jsonData").withColumnRenamed("key", "bkey")
val jsons = sqlContext.read.json(df.drop("blobData").map{
  case Row(key: String, json: String) =>
    s"""{"key": "$key", "jsonData": $json}"""
})
val parsed = jsons.join(blobs, $"key" === $"bkey").drop("bkey")
parsed.printSchema
// root
// |-- jsonData: struct (nullable = true)
// | |-- k: string (nullable = true)
// | |-- v: double (nullable = true)
// |-- key: long (nullable = true)
// |-- blobData: string (nullable = true)
An alternative (cheaper, although more complex) approach is to use an UDF to parse JSON and output a struct or map column. For example something like this:
import net.liftweb.json.parse
case class KV(k: String, v: Int)
val parseJson = udf((s: String) => {
  implicit val formats = net.liftweb.json.DefaultFormats
  parse(s).extract[KV]
})
val parsed = df.withColumn("parsedJSON", parseJson($"jsonData"))
parsed.show
// +---+--------------------+------------------+----------+
// |key| jsonData| blobData|parsedJSON|
// +---+--------------------+------------------+----------+
// | 1|{"k": "foo", "v":...|some_other_field_1| [foo,1]|
// | 2|{"k": "bar", "v":...|some_other_field_2| [bar,3]|
// +---+--------------------+------------------+----------+
parsed.printSchema
// root
// |-- key: string (nullable = true)
// |-- jsonData: string (nullable = true)
// |-- blobData: string (nullable = true)
// |-- parsedJSON: struct (nullable = true)
// | |-- k: string (nullable = true)
// | |-- v: integer (nullable = false)
| Cassandra | 34,069,282 | 71 |
The problem that I am having is that I want to run the following command (and I can't):
cqlsh < cql_directory/cql_create_stuff.cql
Because I have not logged in to cqlsh.
So I logged in:
cqlsh -u 'my_username' -p 'my_super_secret_password'
and now I tried running the command in the cqlsh shell, but it just responds with a syntax error.
Basically, how do I log in to cqlsh and run an external CQL script from my file system?
| Use the SOURCE command:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/source_r.html
You can use the -f option as well, to execute commands from a file:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cqlsh.html
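For example, a sketch using the file and credentials from the question — from within cqlsh:
cqlsh> SOURCE 'cql_directory/cql_create_stuff.cql';
or directly from the operating system shell:
cqlsh -u 'my_username' -p 'my_super_secret_password' -f cql_directory/cql_create_stuff.cql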
| Cassandra | 22,495,409 | 66 |
I want to clarify the very basic concepts of replication factor and consistency level in Cassandra. I would highly appreciate it if someone could provide answers to the questions below.
RF- Replication Factor
RC- Read Consistency
WC- Write Consistency
2 cassandra nodes (Ex: A, B) RF=1, RC=ONE, WC=ONE or ANY
can I write data to node A and read from node B ?
what will happen if A goes down ?
3 cassandra nodes (Ex: A, B, C) RF=2, RC=QUORUM, WC=QUORUM
can I write data to node A and read from node C ?
what will happen if node A goes down ?
3 cassandra nodes (Ex: A, B, C) RF=3, RC=QUORUM, WC=QUORUM
can I write data to node A and read from node C ?
what will happen if node A goes down ?
| Short summary: Replication factor describes how many copies of your data exist. Consistency level describes the behavior seen by the client. Perhaps there's a better way to categorize these.
As an example, you can have a replication factor of 2. When you write, two copies will always be stored, assuming enough nodes are up. When a node is down, writes for that node are stashed away and written when it comes back up, unless it's down long enough that Cassandra decides it's gone for good.
Now say in that example you write with a consistency level of ONE. The client will receive a success acknowledgement after a write is done to one node, without waiting for the second write. If you did a write with a CL of ALL, the acknowledgement to the client will wait until both copies are written. There are very many other consistency level options, too many to cover all the variants here. Read the Datastax doc, though, it does a good job of explaining them.
In the same example, if you read with a consistency level of ONE, the response will be sent to the client after a single replica responds. Another replica may have newer data, in which case the response will not be up-to-date. In many contexts, that's quite sufficient. In others, the client will need the most up-to-date information, and you'll use a different consistency level on the read - perhaps a level ALL. In that way, the consistency of Cassandra and other post-relational databases is tunable in ways that relational databases typically are not.
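As an illustration, here is a minimal cqlsh sketch (the table and key are made up) of choosing the consistency level per session before issuing a statement; client drivers expose the same idea per statement or per session:
CONSISTENCY QUORUM;
SELECT * FROM users WHERE id = 1;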
Now getting back to your examples.
Example one: Yes, you can write to A and read from B, even if B doesn't have its own replica. B will ask A for it on your client's behalf. This is also true for your other cases where the nodes are all up. When they're all up, you can write to one and read from another.
For writes, with WC=ONE, if the node for the single replica is up and is the one you're connect to, the write will succeed. If it's for the other node, the write will fail. If you use ANY, the write will succeed, assuming you're talking to the node that's up. I think you also have to have hinted handoff enabled for that. The down node will get the data later, and you won't be able to read it until after that occurs, not even from the node that's up.
In the other two examples, replication factor will affect how many copies are eventually written, but doesn't affect client behavior beyond what I've described above. The QUORUM will affect client behavior in that you will have to have a sufficient number of nodes up and responding for writes and reads. If you get lucky and at least (nodes/2) + 1 nodes are up out of the nodes you need, then writes and reads will succeed. If you don't have enough nodes with replicas up, reads and writes will fail. Overall some QUORUM reads and writes can succeed if a node is down, assuming that that node is either not needed to store your replica, or if its outage still leaves enough replica nodes available.
| Cassandra | 24,587,869 | 66 |
This is a super basic question but it's actually been bugging me for days. Is there a good way to obtain the equivalent of a COUNT(*) of a given table in Cassandra?
I will be moving several hundreds of millions of rows into C* for some load testing and I'd like to at least get a row count on some sample ETL jobs before I move massive amounts of data over the network.
The best idea I have is to basically loop over each row with Python and auto increment a counter. Is there a better way to determine (or even estimate) the row size of a C* table? I've also poked around Datastax Ops Center to see if I can determine the row size there. If you can, I don't see how it's possible.
Anyone else needed to get a count(*) of a table in C*? If so, how'd you go about doing it?
| Yes, you can use COUNT(*). Here's the documentation.
A SELECT expression using COUNT(*) returns the number of rows that matched the query. Alternatively, you can use COUNT(1) to get the same result.
Count the number of rows in the users table:
SELECT COUNT(*) FROM users;
| Cassandra | 26,620,151 | 66 |
I am trying to insert into my CQL table from the command line. I am able to insert everything. But I am wondering, if I have a timestamp column, how can I insert into that timestamp column from the command line? Basically, I want to insert the current timestamp whenever I am inserting into my CQL table.
Currently, I am hardcoding the timestamp whenever I am inserting into the CQL table below -
CREATE TABLE TEST (ID TEXT, NAME TEXT, VALUE TEXT, LAST_MODIFIED_DATE TIMESTAMP, PRIMARY KEY (ID));
INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE) VALUES ('1', 'elephant', 'SOME_VALUE', 1382655211694);
Is there any way to get the current timestamp using some predefined function in CQL, so that while inserting into the above table I can use that function instead of hardcoding the timestamp?
| You can use the timeuuid functions now() and dateof() (or in later versions of Cassandra, toTimestamp()), e.g.,
INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE)
VALUES ('2', 'elephant', 'SOME_VALUE', dateof(now()));
The now function takes no arguments and generates a new unique timeuuid (at the time where the statement using it is executed). The dateOf function takes a timeuuid argument and extracts the embedded timestamp. (Taken from the CQL documentation on timeuuid functions).
Cassandra >= 2.2.0-rc2
dateof() was deprecated in Cassandra 2.2.0-rc2. For later versions you should replace its use with toTimestamp(), as follows:
INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE)
VALUES ('2', 'elephant', 'SOME_VALUE', toTimestamp(now()));
| Cassandra | 19,623,432 | 65 |
Given that TimeUUID handily allows you to use now() in CQL, are there any reasons you wouldn't just go ahead and always use TimeUUID instead of plain old UUID?
| UUID and TIMEUUID are stored the same way in Cassandra, and they only really represent two different sorting implementations.
TIMEUUID columns are sorted by their time components first, and then by their raw bytes, whereas UUID columns are sorted by their version first, then (if both are version 1) by their time component, and finally by their raw bytes. Curiously, the time component sorting implementations are duplicated between UUIDType and TimeUUIDType in the Cassandra code, except for different formatting.
I think of the UUID vs. TIMEUUID question primarily as documentation: if you choose TIMEUUID you're saying that you're storing things in chronological order, and that these things can occur at the same time, so a simple timestamp isn't enough. Using UUID says that you don't care about order (even if in practice the columns will be ordered by time if you put version 1 UUIDs in them), you just want to make sure that things have unique IDs.
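For illustration, here is a sketch (table and column names are made up) of a table that leans on TIMEUUID for chronological clustering within a partition:

CREATE TABLE events (
    sensor_id text,
    event_id timeuuid,
    payload text,
    PRIMARY KEY (sensor_id, event_id)
) WITH CLUSTERING ORDER BY (event_id DESC);

INSERT INTO events (sensor_id, event_id, payload) VALUES ('sensor-1', now(), 'reading');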
Even if using NOW() to generate UUID values is convenient, it's also very surprising to other people reading your code.
It probably does not matter much in the grand scheme of things, but sorting non-version 1 UUIDs is a bit faster than version 1, so if you have a UUID column and generate the UUIDs yourself, go for another version.
| Cassandra | 17,945,677 | 64 |
I'm new to Cassandra, and I have to export the result of a specific query to a CSV file.
I found the COPY command, but (from what I understand) it only allows you to copy an already existing table to a CSV file, whereas what I want is to copy the stdout of my query directly to the CSV file. Is there any way to do this with the COPY command, or in another way?
My query is of the form (select column1, column2 from table where condition = xy) and I'm using cqlsh.
| If you don't mind your data using a pipe ('|') as a delimiter, you can try using the -e flag on cqlsh. The -e flag allows you to send a query to Cassandra from the command prompt, where you could redirect or even perform a grep/awk/whatever on your output.
$ bin/cqlsh -e'SELECT video_id,title FROM stackoverflow.videos' > output.txt
$ cat output.txt
video_id | title
--------------------------------------+---------------------------
2977b806-df76-4dd7-a57e-11d361e72ce1 | Star Wars
ab696e1f-78c0-45e6-893f-430e88db7f46 | The Witches of Whitewater
15e6bc0d-6195-4d8b-ad25-771966c780c8 | Pulp Fiction
(3 rows)
Older versions of cqlsh don't have the -e flag. For older versions of cqlsh, you can put your command into a file, and use the -f flag.
$ echo "SELECT video_id,title FROM stackoverflow.videos;" > select.cql
$ bin/cqlsh -f select.cql > output.txt
From here, doing a cat on output.txt should yield the same rows as above.
| Cassandra | 26,909,408 | 63 |
For a bit of background - this question deals with a project running on a single small EC2 instance, and is about to migrate to a medium one. The main components are Django, MySQL and a large number of custom analysis tools written in python and java, which do the heavy
lifting. The same machine is running Apache as well.
The data model looks like the following - a large amount of real time data comes in streamed from various networked sensors, and ideally, I'd like to establish a long-poll approach rather than the current poll every 15 minutes approach (a limitation of computing stats and writing into the database itself). Once the data comes in, I store the raw version in
MySQL, let the analysis tools loose on this data, and store statistics in another few tables. All of this is rendered using Django.
Relational features I would need -
Order by [SliceRange in Cassandra's API seems to satisfy this]
Group by
Manytomany relations between multiple tables [Cassandra SuperColumns seem to do well for one to many]
Sphinx on this gives me a nice full-text engine, so that's a necessity too. [On Cassandra, the Lucandra project seems to satisfy this need]
My major problem is that data reads are extremely slow (and writes aren't that hot either). I don't want to throw a lot of money and hardware on it right now, and I'd prefer something that can scale easily with time. Vertically scaling MySQL is not trivial in that sense (or cheap).
So essentially, after having read a lot about NOSQL and experimented with things like MongoDB, Cassandra and Voldemort, my questions are,
On a medium EC2 instance, would I gain any benefits in reads/writes by shifting to something like Cassandra? This article (pdf) definitely seems to suggest that. Currently, I'd say a few hundred writes per minute would be the norm. For reads - since the data changes every 5 minutes or so, cache invalidation has to happen pretty quickly. At some point, it should be able to handle a large number of concurrent users as well. The app performance currently gets killed on MySQL doing some joins on large tables even if indexes are created - something to the order of 32k rows takes more than a minute to render. (This may be an artifact of EC2 virtualized I/O as well). Size of tables is around 4-5 million rows, and there are about 5 such tables.
Everyone talks about using Cassandra on multiple nodes, given the CAP theorem and eventual consistency. But, for a project that is just beginning to grow, does it make sense
to deploy a one node cassandra server? Are there any caveats? For instance, can it replace MySQL as a backend for Django? [Is this recommended?]
If I do shift, I'm guessing I'll have to rewrite parts of the app to do a lot more "administrivia" since I'd have to do multiple lookups to fetch rows.
Would it make any sense to just use MySQL as a key value store rather than a relational engine, and go with that? That way I could utilize a large number of stable APIs available, as well as a stable engine (and go relational as needed). (Brett Taylor's post from Friendfeed on this - http://bret.appspot.com/entry/how-friendfeed-uses-mysql)
Any insights from people who've done a shift would be greatly appreciated!
Thanks.
| Cassandra and the other distributed databases available today do not provide the kind of ad-hoc query support you are used to from sql. This is because you can't distribute queries with joins performantly, so the emphasis is on denormalization instead.
However, Cassandra 0.6 (beta officially out tomorrow, but you can build from the 0.6 branch yourself if you're impatient) supports Hadoop map/reduce for analytics, which actually sounds like a good fit for you.
Cassandra provides excellent support for adding new nodes painlessly, even to an initial group of one.
That said, at a few hundred writes/minute you're going to be fine on mysql for a long, long time. Cassandra is much better at being a key/value store (even better, key/columnfamily) but MySQL is much better at being a relational database. :)
There is no django support for Cassandra (or other nosql database) yet. They are talking about doing something for the next version after 1.2, but based on talking to django devs at pycon, nobody is really sure what that will look like yet.
| Cassandra | 2,332,113 | 60 |
I'm working on a real-time advertising platform with a heavy emphasis on performance. I've always developed with MySQL, but I'm open to trying something new like MongoDB or Cassandra if significant speed gains can be achieved. I've been reading about both all day, but since both are being rapidly developed, a lot of the information appears somewhat dated.
The main data stored would be entries for each click, incremented rows for views, and information for each campaign (just some basic settings, etc). The speed gains need to be found in inserting clicks, updating view totals, and generating real-time statistic reports. The platform is developed with PHP.
Or maybe none of these?
| There are several ways to achieve this with all of the technologies listed. It is more a question of how you use them. Your ideal solution may use a combination of these, with some consideration for usage patterns. I don't feel that the information out there is that dated because the concepts at play are very fundamental. There may be new NoSQL databases and fixes to existing ones, but your question is primarily architectural.
NoSQL solutions like MongoDB and Cassandra get a lot of attention for their insert performance. People tend to complain about the update/insert performance of relational databases but there are ways to mitigate these issues.
Starting with MySQL you could review O'Reilly's High Performance MySQL, optimise the schema, add more memory perhaps run this on different hardware from the rest of your app (assuming you used MySQL for that), or partition/shard data. Another area to consider is your application. Can you queue inserts and updates at the application level before insertion into the database? This will give you some flexibility and is probably useful in all cases. Depending on how your final schema looks, MySQL will give you some help with extracting the data as long as you are comfortable with SQL. This is a benefit if you need to use 3rd party reporting tools etc.
MongoDB and Cassandra are different beasts. My understanding is that it was easier to add nodes to the latter but this has changed since MongoDB has replication etc built-in. Inserts for both of these platforms are not constrained in the same manner as a relational database. Pulling data out is pretty quick too, and you have a lot of flexibility with data format changes. The tradeoff is that you can't use SQL (a benefit for some) so getting reports out may be trickier. There is nothing to stop you from collecting data in one of these platforms and then importing it into a MySQL database for further analysis.
Based on your requirements there are tools other than NoSQL databases which you should look at such as Flume. These make use of the Hadoop platform which is used extensively for analytics. These may have more flexibility than a database for what you are doing. There is some content from Hadoop World that you might be interested in.
| Cassandra | 6,162,789 | 60 |
I'm quite a newbie to Spring Boot, but here's the problem I'm facing now:
// Application.java
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Autowired
    private Cluster cluster = null;

    @PostConstruct
    private void migrateCassandra() {
        Database database = new Database(this.cluster, "foo");
        MigrationTask migration = new MigrationTask(database, new MigrationRepository());
        migration.migrate();
    }
}
So basically, I'm trying to bootstrap a spring application, and after that, do some cassandra migrations.
I also have defined a repository for my user model:
// UserRepo.java
public interface UserRepo extends CassandraRepository<User> {
}
Now I'm trying to test my repo class using the following simple test case:
// UserRepoTest.java
@RunWith(SpringRunner.class)
@AutoConfigureTestDatabase(replace = Replace.NONE)
@DataJpaTest
public class UserRepoTest {

    @Autowired
    private UserRepo userRepo = null;

    @Autowired
    private TestEntityManager entityManager = null;

    @Test
    public void findOne_whenUserExists_thenReturnUser() {
        String id = UUID.randomUUID().toString();
        User user = new User();
        user.setId(id);
        this.entityManager.persist(user);
        assertEquals(this.userRepo.findOne(user.getId()).getId(), id);
    }

    @Test
    public void findOne_whenUserNotExists_thenReturnNull() {
        assertNull(this.userRepo.findOne(UUID.randomUUID().toString()));
    }
}
I would expect the test to pass, but instead, I got an error saying "No qualifying bean of type 'com.datastax.driver.core.Cluster' available". It looks like spring failed to autowire the cluster object, but why is that? How do I fix this? Thanks a lot!
| The test environment needs to know where your beans are defined, so you have to tell it the location.
In your test class, add the @ContextConfiguration annotation:
@RunWith(SpringRunner.class)
@AutoConfigureTestDatabase(replace = Replace.NONE)
@DataJpaTest
@ContextConfiguration(classes = {YourBeans.class, MoreOfYourBeans.class})
public class UserRepoTest {

    @Autowired
    private UserRepo userRepo = null;

    @Autowired
    private TestEntityManager entityManager = null;
| Cassandra | 44,925,324 | 60 |
I'm currently experimenting around with Cassandra.
On the client-side (with Hector) I look up a few keys like this:
ColumnFamilyResult<String, String> result = template.queryColumns(Arrays.asList("key1","key2","key3"));
Most of the time it seems to work. But other times I get a timeout exception on the client:
Caused by: me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:35)
at me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate$1.execute(ThriftColumnFamilyTemplate.java:100)
at me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate$1.execute(ThriftColumnFamilyTemplate.java:88)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate.sliceInternal(ThriftColumnFamilyTemplate.java:88)
at me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate.doExecuteSlice(ThriftColumnFamilyTemplate.java:46)
at me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.queryColumns(ColumnFamilyTemplate.java:113)
at info.gamlor.experiments.Cassandra.readObjectByKey(ComplexCassandra.java:255)
Caused by: TimedOutException()
at org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:7772)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:570)
at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:542)
at me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate$1.execute(ThriftColumnFamilyTemplate.java:95)
And on the server this exception shows up:
ERROR 11:33:55,312 Exception in thread Thread[ReadStage:91,5,main]
java.lang.AssertionError: DecoratedKey(4948402862350542345439897754126541659, 6932) != DecoratedKey(132475956107784875457507977471906551877, 726f6f74) in C:\tem
p\cassandra\lib\cassandra\data\CassandraPolepos\ComplexObjects\CassandraPolepos-ComplexObjects-hd-2-Data.db
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:58)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:66)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:78)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1331)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1193)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1128)
at org.apache.cassandra.db.Table.getRow(Table.java:378)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:816)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Sometimes the key values in the DecoratedKey(...) part take up pages.
Does anyone have a hint about what I'm doing wrong, or how to investigate this issue?
Thanks.
| This is either https://issues.apache.org/jira/browse/CASSANDRA-4687 or https://issues.apache.org/jira/browse/CASSANDRA-5202
You can run the command "nodetool invalidatekeycache" on all of the servers showing the error. If it keeps coming back, you may want to disable the key cache.
If it is 5202 it should only happen after dropping and recreating a column family using the same name.
| Cassandra | 11,623,356 | 59 |
How am I supposed to boot a new Cassandra node when I get this error?
INFO [SSTableBatchOpen:1] 2014-02-25 01:51:17,132 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/system/local/system-local-jb-5 (5725 bytes)
ERROR [main] 2014-02-25 01:51:17,377 CassandraDaemon.java (line 237) Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name Test Cluster != configured name thisisstupid
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:542)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:233)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:462)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:552)
Name of cluster in the cassandra.yaml file is:
cluster_name: 'thisisstupid'
How do I resolve this?
| You can rename the cluster without deleting data by updating its name in the system.local table (but you have to do this on each node...)
cqlsh> UPDATE system.local SET cluster_name = 'test' where key='local';
# flush the sstables to persist the update.
bash $ ./nodetool flush
Finally you need to rename the cluster to the new name in cassandra.yaml (again on each node)
| Cassandra | 22,006,887 | 59 |
For Cassandra, do UPDATEs become an implied INSERT if the selected row does not exist? That is, if I say
UPDATE users SET name = "Raedwald" WHERE id = 545127
and id is the PRIMARY KEY of the users table, and the table has no row with a key of 545127, will that be equivalent to
INSERT INTO users (id, name) VALUES (545127, "Raedwald")
I know that the opposite is true: an INSERT for an id that already exists becomes an UPDATE of the row with that id. Older Cassandra documentation talked about inserts actually being "upserts" for that reason.
I'm interested in the case for CQL3, Cassandra version 1.2+.
| Yes, for Cassandra UPDATE is synonymous with INSERT, as explained in the CQL documentation where it says the following about UPDATE:
Note that unlike in SQL, UPDATE does not check the prior existence of the row: the row is created if none existed before, and updated otherwise. Furthermore, there is no mean to know which of creation or update happened. In fact, the semantic of INSERT and UPDATE are identical.
For the semantics to be different, Cassandra would need to do a read to know if the row already exists. Cassandra is write optimized, so you can always assume it doesn't do a read before write on any write operation. The only exception is counters (unless replicate_on_write = false), in which case replication on increment involves a read.
| Cassandra | 17,348,558 | 56 |
I'm looking for a way to delete all of the rows from a given column family in cassandra.
This is the equivalent of TRUNCATE TABLE in SQL.
| You can use the truncate thrift call, or the TRUNCATE <table> command in CQL.
http://www.datastax.com/docs/1.0/references/cql/TRUNCATE
| Cassandra | 10,520,110 | 54 |
Suppose I have a column family:
CREATE TABLE update_audit (
scopeid bigint,
formid bigint,
time timestamp,
record_link_id bigint,
ipaddress text,
user_zuid bigint,
value text,
PRIMARY KEY ((scopeid, formid), time)
) WITH CLUSTERING ORDER BY (time DESC)
With two secondary indexes, where record_link_id is a high-cardinality column:
CREATE INDEX update_audit_id_idx ON update_audit (record_link_id);
CREATE INDEX update_audit_user_zuid_idx ON update_audit (user_zuid);
According to my knowledge Cassandra will create two hidden column families like so:
CREATE TABLE update_audit_id_idx(
    record_link_id bigint,
    scopeid bigint,
    formid bigint,
    time timestamp,
    PRIMARY KEY ((record_link_id), scopeid, formid, time)
);
CREATE TABLE update_audit_user_zuid_idx(
    user_zuid bigint,
    scopeid bigint,
    formid bigint,
    time timestamp,
    PRIMARY KEY ((user_zuid), scopeid, formid, time)
);
Cassandra secondary indexes are implemented as local indexes rather than being distributed like normal tables. Each node only stores an index for the data it stores.
Consider the following query:
select * from update_audit where scopeid=35 and formid=78005 and record_link_id=9897;
How will this query execute 'under the hood' in Cassandra?
How will a high-cardinality column index (record_link_id) affect its performance?
Will Cassandra touch all nodes for the above query? Why?
Which criteria will be executed first, base table partition_key or secondary index partition_key? How will Cassandra intersect these two results?
| select * from update_audit where scopeid=35 and formid=78005 and record_link_id=9897;
How the above query will work internally in cassandra?
Essentially, all data for partition scopeid=35 and formid=78005 will be returned, and then filtered by the record_link_id index. It will look for the record_link_id entry for 9897, and attempt to match-up entries that match the rows returned where scopeid=35 and formid=78005. The intersection of the rows for the partition keys and the index keys will be returned.
How high-cardinality column (record_link_id)index will affect the query performance for the above query?
High-cardinality indexes essentially create a row for (almost) each entry in the main table. Performance is affected, because Cassandra is designed to perform sequential reads for query results. An index query essentially forces Cassandra to perform random reads. As cardinality of your indexed value increases, so does the time it takes to find the queried value.
Does cassandra will touch all nodes for the above query? WHY?
No. It should only touch a node that is responsible for the scopeid=35 and formid=78005 partition. Indexes likewise are stored locally, only contain entries that are valid for the local node.
creating index over high-cardinality columns will be the fastest and best data model
The problem here is that approach does not scale, and will be slow if update_audit is a large dataset. MVP Richard Low has a great article on secondary indexes(The Sweet Spot For Cassandra Secondary Indexing), and particularly on this point:
If your table was significantly larger than memory, a query would be very slow even to return just a few thousand results. Returning potentially millions of users would be disastrous even though it would appear to be an efficient query.
...
In practice, this means indexing is most useful for returning tens, maybe hundreds of results. Bear this in mind when you next consider using a secondary index.
Now, your approach of first restricting by a specific partition will help (as your partition should certainly fit into memory). But I feel the better-performing choice here would be to make record_link_id a clustering key, instead of relying on a secondary index.
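A sketch of that alternative (the table name is made up; the columns come from the original definition): promoting record_link_id to a clustering column lets the example query run without any secondary index.

CREATE TABLE update_audit_by_record (
    scopeid bigint,
    formid bigint,
    record_link_id bigint,
    time timestamp,
    ipaddress text,
    user_zuid bigint,
    value text,
    PRIMARY KEY ((scopeid, formid), record_link_id, time)
) WITH CLUSTERING ORDER BY (record_link_id ASC, time DESC);

SELECT * FROM update_audit_by_record WHERE scopeid=35 AND formid=78005 AND record_link_id=9897;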
Edit
How does having index on low cardinality index when there are millions of users scale even when we provide the primary key
It will depend on how wide your rows are. The tricky thing about extremely low cardinality indexes, is that the % of rows returned is usually greater. For instance, consider a wide-row users table. You restrict by the partition key in your query, but there are still 10,000 rows returned. If your index is on something like gender, your query will have to filter-out about half of those rows, which won't perform well.
Secondary indexes tend to work best on (for lack of a better description) "middle of the road" cardinality. Using the above example of a wide-row users table, an index on country or state should perform much better than an index on gender (assuming that most of those users don't all live in the same country or state).
Edit 20180913
For your answer to 1st question "How the above query will work internally in cassandra?", do you know what's the behavior when query with pagination?
Consider the following diagram, taken from the Java Driver documentation (v3.6):
Basically, paging will cause the query to break itself up and return to the cluster for the next iteration of results. It'd be less likely to timeout, but performance will trend downward, proportional to the size of the total result set and the number of nodes in the cluster.
TL;DR; The more requested results spread over more nodes, the longer it will take.
| Cassandra | 29,692,738 | 54 |
Just learning cassandra, is there a way to insert a UUID using CQL, ie
create table stuff (uid uuid primary key, name varchar);
insert into stuff (name) values('my name'); // fails
insert into stuff (uid, name) values(1, 'my name'); // fails
Can you do something like
insert into stuff (uid, name) values(nextuid(), 'my name');
| As of Cassandra 2.0.7 you can just use uuid(), which generates a random type 4 UUID:
INSERT INTO users(uid, name) VALUES(uuid(), 'my name');
| Cassandra | 17,945,341 | 53 |
What is the difference between UPDATE and INSERT when executing CQL against Cassandra?
It looks like there used to be no difference, but now the documentation says that INSERT does not support counters while UPDATE does.
Is there a "preferred" method to use? Or are there cases where one should be used over the other?
Thanks so much!
| There is a subtle difference. Inserted records via INSERT remain if you set all non-key fields to null. Records inserted via UPDATE go away if you set all non-key fields to null.
Try this:
CREATE TABLE T (
pk int,
f1 int,
PRIMARY KEY (pk)
);
INSERT INTO T (pk, f1) VALUES (1, 1);
UPDATE T SET f1=2 where pk=2;
SELECT * FROM T;
Returns:
pk | f1
----+----
1 | 1
2 | 2
Now, update each row setting f1 to null.
UPDATE T SET f1 = null WHERE pk = 1;
UPDATE T SET f1 = null WHERE pk = 2;
SELECT * FROM T;
Note that row 1 remains, while row 2 is removed.
pk | f1
----+------
1 | null
If you look at these using cassandra-cli, you will see a difference in how the rows are added.
I'd sure like to know whether this is by design or a bug and see this behavior documented.
| Cassandra | 16,532,227 | 50 |
Would like to populate some static test data via a CQLsh script.
This doesn't work: (device_id is UUID)
insert into devices (device_id, geohash,name, external_identifier, measures, tags)
values ('c37d661d-7e61-49ea-96a5-68c34e83db3a','9q9p3yyrn1', 'Acme1', '936', {'aparPower','actPower','actEnergy'},{'make':'Acme'});
Bad Request: Invalid STRING constant
(c37d661d-7e61-49ea-96a5-68c34e83db3a) for device_id of type uuid
I can't seem to find any CQL function to convert to the proper type. Do I need to do this from a Python script?
Thanks,
Chris
| You should leave the quotes off the UUID so that it isn't interpreted as a string, i.e.
insert into devices (device_id, geohash,name, external_identifier, measures, tags)
values
(c37d661d-7e61-49ea-96a5-68c34e83db3a,'9q9p3yyrn1', 'Acme1', '936', {'aparPower','actPower','actEnergy'},{'make':'Acme'});
| Cassandra | 19,055,630 | 50 |
There is a big database, 1,000,000,000 rows, called threads (these threads actually exist, I'm not making things harder just because I enjoy it). The threads table has only a few fields in it, to make things faster: (int id, string hash, int replycount, int dateline (timestamp), int forumid, string title)
Query:
select * from thread where forumid = 100 and replycount > 1 order by dateline desc limit 10000, 100
Since that there are 1G of records it's quite a slow query. So I thought, let's split this 1G of records in as many tables as many forums(category) I have! That is almost perfect. Having many tables I have less record to search around and it's really faster. The query now becomes:
select * from thread_{forum_id} where replycount > 1 order by dateline desc limit 10000, 100
This is really faster with 99% of the forums (categories), since most of those have only a few topics (100k-1M). However, because there are some with about 10M records, some queries are still too slow (0.1/0.2 seconds, too much for my app! I'm already using indexes!).
I don't know how to improve this using MySQL. Is there a way?
For this project I will use 10 Servers (12GB ram, 4x7200rpm hard disk on software raid 10, quad core)
The idea was to simply split the databases among the servers, but with the problem explained above that is still not enough.
If I install Cassandra on these 10 servers (supposing I find the time to make it work as it is supposed to), should I expect a performance boost?
What should I do? Keep working with MySQL, with the database distributed across multiple machines, or build a Cassandra cluster?
I was asked to post the indexes; here they are:
mysql> show index in thread;
PRIMARY id
forumid
dateline
replycount
Select explain:
mysql> explain SELECT * FROM thread WHERE forumid = 655 AND visible = 1 AND open <> 10 ORDER BY dateline ASC LIMIT 268000, 250;
+----+-------------+--------+------+---------------+---------+---------+-------------+--------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+---------------+---------+---------+-------------+--------+-----------------------------+
| 1 | SIMPLE | thread | ref | forumid | forumid | 4 | const,const | 221575 | Using where; Using filesort |
+----+-------------+--------+------+---------------+---------+---------+-------------+--------+-----------------------------+
| You should read the following and learn a little bit about the advantages of a well-designed InnoDB table and how best to use clustered indexes - only available with InnoDB!
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
http://www.xaprb.com/blog/2006/07/04/how-to-exploit-mysql-index-optimizations/
then design your system something along the lines of the following simplified example:
Example schema (simplified)
The important features are that the tables use the innodb engine and the primary key for the threads table is no longer a single auto_incrementing key but a composite clustered key based on a combination of forum_id and thread_id. e.g.
threads - primary key (forum_id, thread_id)
forum_id thread_id
======== =========
1 1
1 2
1 3
1 ...
1 2058300
2 1
2 2
2 3
2 ...
2 2352141
...
Each forum row includes a counter called next_thread_id (unsigned int) which is maintained by a trigger and increments every time a thread is added to a given forum. This also means we can store 4 billion threads per forum rather than 4 billion threads in total if using a single auto_increment primary key for thread_id.
forum_id title next_thread_id
======== ===== ==============
1 forum 1 2058300
2 forum 2 2352141
3 forum 3 2482805
4 forum 4 3740957
...
64 forum 64 3243097
65 forum 65 15000000 -- ooh a big one
66 forum 66 5038900
67 forum 67 4449764
...
247 forum 247 0 -- still loading data for half the forums !
248 forum 248 0
249 forum 249 0
250 forum 250 0
The disadvantage of using a composite key is that you can no longer just select a thread by a single key value as follows:
select * from threads where thread_id = y;
you have to do:
select * from threads where forum_id = x and thread_id = y;
However, your application code should be aware of which forum a user is browsing so it's not exactly difficult to implement - store the currently viewed forum_id in a session variable or hidden form field etc...
Here's the simplified schema:
drop table if exists forums;
create table forums
(
forum_id smallint unsigned not null auto_increment primary key,
title varchar(255) unique not null,
next_thread_id int unsigned not null default 0 -- count of threads in each forum
)engine=innodb;
drop table if exists threads;
create table threads
(
forum_id smallint unsigned not null,
thread_id int unsigned not null default 0,
reply_count int unsigned not null default 0,
hash char(32) not null,
created_date datetime not null,
primary key (forum_id, thread_id, reply_count) -- composite clustered index
)engine=innodb;
delimiter #
create trigger threads_before_ins_trig before insert on threads
for each row
begin
declare v_id int unsigned default 0;
select next_thread_id + 1 into v_id from forums where forum_id = new.forum_id;
set new.thread_id = v_id;
update forums set next_thread_id = v_id where forum_id = new.forum_id;
end#
delimiter ;
You may have noticed I've included reply_count as part of the primary key which is a bit strange as (forum_id, thread_id) composite is unique in itself. This is just an index optimisation which saves some I/O when queries that use reply_count are executed. Please refer to the 2 links above for further info on this.
Example queries
I'm still loading data into my example tables and so far I have loaded approx. 500 million rows (half as many as your system). When the load process is complete I should expect to have approx:
250 forums * 5 million threads = 1,250,000,000 (1.25 billion rows)
I've deliberately made some of the forums contain more than 5 million threads for example, forum 65 has 15 million threads:
forum_id title next_thread_id
======== ===== ==============
65 forum 65 15000000 -- ooh a big one
Query runtimes
select sum(next_thread_id) from forums;
sum(next_thread_id)
===================
539,155,433 (500 million threads so far and still growing...)
under innodb summing the next_thread_ids to give a total thread count is much faster than the usual:
select count(*) from threads;
How many threads does forum 65 have:
select next_thread_id from forums where forum_id = 65
next_thread_id
==============
15,000,000 (15 million)
again this is faster than the usual:
select count(*) from threads where forum_id = 65
Ok now we know we have about 500 million threads so far and forum 65 has 15 million threads - let's see how the schema performs :)
select forum_id, thread_id from threads where forum_id = 65 and reply_count > 64 order by thread_id desc limit 32;
runtime = 0.022 secs
select forum_id, thread_id from threads where forum_id = 65 and reply_count > 1 order by thread_id desc limit 10000, 100;
runtime = 0.027 secs
Looks pretty performant to me - so that's a single table with 500+ million rows (and growing) with a query that covers 15 million rows in 0.02 seconds (while under load !)
Further optimisations
These would include:
partitioning by range
sharding
throwing money and hardware at it
etc...
hope you find this answer helpful :)
| Cassandra | 4,419,499 | 48 |
I have recently started working with Cassandra Database. Now I am in the process of evaluating which Cassandra client we should go forward with.
I have seen various posts on Stack Overflow about which client to use for Cassandra, but none has a very definitive answer.
My team has asked me to do some research on this and come up with pros and cons for each of the Cassandra client APIs in Java.
As I mentioned, I recently got involved with Cassandra, so I don't have much idea why certain people choose the Pelops client and why certain people go with Astyanax and some other clients.
I know brief things about each of the Cassandra clients, by which I mean I am able to make them work and start reading and writing to the Cassandra database.
Below is the information I have so far.
CASSANDRA APIS
Hector (Production-Ready)
The most stable of the Java APIs, ready for prime-time.
Astyanax (The Up and Comer)
A clean Java API from Netflix. It isn't as widely used as Hector, but it is solid.
Kundera (The NoSQL ORM)
JPA compliant, this is handy when you want to interact with Cassandra via objects.
This constrains you somewhat in that you won't be able to have a dynamic number of
columns/names, etc. But it does allow you to port over ORMs, or centralize storage
onto Cassandra for more traditional uses.
Pelops
I've only used Pelops briefly. It was a straight forward API, but didn't seem to
have the momentum behind it.
PlayORM (ORM without the constraints?)
I just heard about this. It looks like it is trying to solve the impedance
mismatch between traditional JPA-based ORMs and NoSQL by introducing JQL. It looks
promising.
Thrift (Avoid Me!)
This is the "low-level" API.
Below are our priorities in deciding Cassandra Client-
First priorities are: low latency overhead, Asynch API, and reliability/stability for production environment.
(e.g. a more user-friendly API that can be had in the DAL that wraps the client).
Connection pooling and partition awareness are some other good feature to have.
Able to detect any new nodes that got added.
Good Support as well (as pointed by dean below)
Can anyone provide some thoughts on this? And also any pros and cons for each Cassandra Client and also which client can fulfill my requirements will be of great help as well.
I believe I will mainly be deciding between the Astyanax client and the new Datastax client that uses the binary protocol, based on my research so far. But I don't have enough solid information to back my research and present it to my team.
Any comparison between Astyanax client and New Datastax client(which uses new Binary protocol) will be of great help.
It will be of great help to me in my research and will get lot of knowledge on this from different people who have used different clients in the past.
| Thrift is becoming more of a legacy API:
First, you should be aware that the Thrift API is not going to be getting new features ; it's there for backwards compatibility, and not recommended for new projects.
- the paul
So I'd avoid Thrift based APIs (thrift is only kept for backwards compatibility).
In saying that if you do need to use a thrift based API I'd go for Astyanax.
Astyanax is very easy to use compared to other thrift APIs (though my personal experience is that Datastax's driver is even easier).
So you should have a look at Datastax's API (and GitHub repo)? I'm not sure if there any compiled versions of the API for download but you can easily build it with Maven. Also if you take a look at the GitHub repo's commit logs it undergoes very frequent updates.
The driver works exclusively with CQL3 and is asynchronous but be warned that Cassandra 1.2 is the earliest supported version.
Performance
Astyanax is thrift-based and Datastax's driver uses the binary protocol. Here are the latest benchmarks I could find between thrift and CQL (note these are definitely out of date). But in fairness the small difference in performance shown in these benchmarks will rarely matter.
Asynch support
Datastax's asynch support is a definite advantage over Astyanax (Netflix tried implementing it but decided not to).
Documentation
I can't really argue against Netflix's wiki. The documentation is excellent and it's updated fairly frequently. Their wiki includes code examples, and you can find tests in the source code if you need to see the code at work. I struggled to find any documentation for the Datastax driver, however tests are provided in the GitHub repository so that is a starting point.
Also have a look at this answer (well.. not my one anyway) It looks into some advantages/disadvantages of Thrift and CQL.
| Cassandra | 15,983,190 | 48 |
Could someone please clarify the commit log and its use for me?
In Cassandra, when writing to disk, is the commit log the first entry point, or the MemTables?
If the MemTables are what gets flushed to disk, what is the use of the commit log? Is its only purpose to handle sync issues if a data node is down?
| You can think of the commit log as an optimization, but Cassandra would be unusably slow without it. When MemTables get written to disk we call them SSTables. SSTables are immutable, meaning once Cassandra writes them to disk it does not update them. So when a column changes Cassandra needs to write a new SSTable to disk. If Cassandra was writing these SSTables to disk on every update it would be completely IO bound and very slow.
So Cassandra uses a few tricks to get better performance. Instead of writing SSTables to disk on every column update, it keeps the updates in memory and flushes those changes to disk periodically to keep the IO to a reasonable level. But this leads to the obvious problem that if the machine goes down or Cassandra crashes you would lose data on that node. To avoid losing data, in addition to keeping recent changes in memory, Cassandra writes the changes to its CommitLog.
You may be asking why writing to the CommitLog is any better than just writing the SSTables. The CommitLog is optimized for writing. Unlike SSTables, which store rows in sorted order, the CommitLog stores updates in the order in which they were processed by Cassandra. The CommitLog also stores changes for all the column families in a single file, so the disk doesn't need to do a bunch of seeks when it is receiving updates for multiple column families at the same time.
Basically, writing the CommitLog to disk is better because it has to write less data than writing SSTables does, and it writes all that data to a single place on disk.
Cassandra keeps track of what data has been flushed to SSTables and is able to truncate the Commit log once all data older than a certain point has been written.
When Cassandra starts up it has to read the commit log back from that last known good point in time (the point at which we know all previous writes were written to an SSTable). It re-applies the changes in the commit log to its MemTables so it can get into the same state when it stopped. This process can be slow so if you are stopping a Cassandra node for maintenance it is a good idea to use nodetool drain before shutting it down which will flush everything in the MemTables to SSTables and make the amount of work on startup a lot smaller.
| Cassandra | 34,592,948 | 48 |
Is there any robust way of implementing a Cassandra back end for a web application developed using the Django web framework?
| http://github.com/twissandra/twissandra is a django app with a Cassandra backend. It uses https://github.com/datastax/python-driver to talk to Cassandra.
| Cassandra | 2,369,793 | 47 |
What cqlsh command can I use to quickly see the keyspaces in a cluster? cqlsh does not provide show keyspaces and describe cluster isn't as concise as I want.
I'm working using the following specifications:
cqlsh 2.2.0, Cassandra 1.1.10, CQL spec 2.0.0, Thrift protocol 19.33.0
| Very simple. Just enter this command in your cqlsh shell and enjoy
select * from system.schema_keyspaces;
In C*3.x, we can simply use
describe keyspaces
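In Cassandra 3.x the schema tables moved, so the query-based equivalent there would be:

SELECT keyspace_name FROM system_schema.keyspaces;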
| Cassandra | 16,612,463 | 47 |
Using two databases to illustrate this example: CouchDB and Cassandra.
CouchDB
CouchDB uses a B+ tree for document indexes (using a clever modification to work in their append-only environment) - more specifically, as documents are modified (insert/update/delete) they are appended to the running database file, along with a full leaf -> node path from the B+ tree covering all the nodes affected by the updated revision, right after the document.
These piece-mealed index revisions are inlined right alongside the modifications such that the full index is a union of the most recent index modifications appended at the end of the file along with additional pieces further back in the data file that are still relevant and haven't been modified yet.
Searching the B+ tree is O(logn).
Cassandra
Cassandra keeps record keys sorted, in memory, in tables (let's think of them as arrays for this question) and writes them out as separate sorted-string tables (SSTables) from time to time.
We can think of the collection of all of these tables as the "index" (from what I understand).
Cassandra is required to compact/combine these sorted-string tables from time to time, creating a more complete file representation of the index.
Searching a sorted array is O(logn).
Question
Assuming a similar level of complexity between maintaining partial B+ tree chunks in CouchDB and maintaining partial sorted-string indices in Cassandra, and given that both provide O(logn) search time, which one do you think would make a better representation of a database index, and why?
I am specifically curious if there is an implementation detail about one over the other that makes it particularly attractive or if they are both a wash and you just pick whichever data structure you prefer to work with/makes more sense to the developer.
Thank you for the thoughts.
| When comparing a BTree index to an SSTable index, you should consider the write complexity:
When writing randomly to a copy-on-write BTree, you will incur random reads (to do the copy of the leaf node and path). So while the writes may be sequential on disk, for datasets larger than RAM these random reads will quickly become the bottleneck. For an SSTable-like index, no such read occurs on write - there will only be the sequential writes.
You should also consider that in the worst case, every update to a BTree could incur log_b N IOs - that is, you could end up writing 3 or 4 blocks for every key. If key size is much less than block size, this is extremely expensive. For an SSTable-like index, each write IO will contain as many fresh keys as it can, so the IO cost for each key is more like 1/B.
In practice, this makes SSTable-like indexes thousands of times faster (for random writes) than BTrees.
When considering implementation details, we have found it a lot easier to implement SSTable-like indexes (almost) lock-free, whereas locking strategies for BTrees have become quite complicated.
You should also re-consider your read costs. You are correct that a BTree is O(log_b N) random IOs for random point reads, but an SSTable-like index is actually O(#sstables . log_b N). Without a decent merge scheme, #sstables is proportional to N. There are various tricks to get round this (using Bloom Filters, for instance), but these don't help with small, random range queries. This is what we found with Cassandra:
Cassandra under heavy write load
This is why Castle, our (GPL) storage engine, does merges slightly differently, and can achieve much better (O(log^2 N)) range query performance with a slight trade-off in write performance (O(log^2 N / B)). In practice we find it to be quicker than Cassandra's SSTable index for writes as well.
If you want to know more about this, I've given a talk about how it works:
podcast
slides
| Cassandra | 8,651,346 | 46 |
What's the meaning of the frozen keyword in Cassandra?
I'm trying to read this documentation page: Using a user-defined type, but their explanation for the frozen keyword (which they use in their examples) is not clear enough for me:
To support future capabilities, a column definition of a user-defined
or tuple type requires the frozen keyword. Cassandra serializes a
frozen value having multiple components into a single value. For
examples and usage information, see "Using a user-defined type",
"Tuple type", and Collection type.
I haven't found any other definition or a clear explanation for that in the net.
| In Cassandra, if you define a UDT or collection as frozen, you can't update individual items of the UDT or collection; you have to reinsert the full value.
A frozen value serializes multiple components into a single value. Non-frozen types allow updates to individual fields. Cassandra treats the value of a frozen type as a blob. The entire value must be overwritten.
Source : https://docs.datastax.com/en/cql/3.1/cql/cql_reference/collection_type_r.html
@Alon : "Long story short: frozen = immutable"
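A minimal sketch of what that means in practice (the UDT and table here are made up for illustration):

CREATE TYPE address (street text, city text);

CREATE TABLE users_frozen_demo (
    id int PRIMARY KEY,
    home frozen<address>
);

-- allowed: overwrite the whole frozen value
UPDATE users_frozen_demo SET home = {street: '1 Main St', city: 'Springfield'} WHERE id = 1;

-- rejected for a frozen column: updating a single field of the UDT
-- UPDATE users_frozen_demo SET home.city = 'Shelbyville' WHERE id = 1;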
| Cassandra | 42,532,743 | 46 |
Dynamo-like databases (e.g. Cassandra) can enforce consistency by means of quorum, i.e. a number of synchronously written replicas (W) and a number of replicas to read (R) should be chosen in such a way that W+R>N where N is a replication factor. On the other hand, PAXOS-based systems like Zookeeper are also used as a consistent fault-tolerant storage.
What is the difference between these two approaches? Does PAXOS provide guarantees that are not provided by W+R>N schema?
| Yes, Paxos provides guarantees that are not provided by the Dynamo-like systems and their read-write quorums. The difference is how failures are handled and what happens during a write. After a successful write, both kind of systems behave similarly. The data will be saved and available for reading afterwards (until overwritten or deleted) and so on.
The difference appears during a write and after failures. Until you get a successful answer from W nodes when writing something to the eventually consistent systems, the data may have been written to some nodes and not to others, and there is no guarantee that the whole system agrees on the current value. If you try to read the data back at this point, some clients may get the new data back and some the old data back. In other words, the system is not immediately consistent. This is because writes aren't atomic across nodes in these systems. There are usually mechanisms to "heal" an inconsistency like this and "eventually" the system will become consistent again (i.e. reads will once again always return the same value, until something new is written). This is the reason why they are often called "eventually consistent". Inconsistencies can (and will) appear, but they will always be dealt with and reconciled eventually.
With Paxos, writes can be made atomic across nodes and inconsistencies between nodes are therefore possible to avoid. The Paxos algorithm makes it possible to guarantee that non-faulty nodes never disagree on the outcome of a write, at any point in time. Either the write succeeded everywhere or nowhere. There will never be any inconsistent reads at any point (if it's correctly implemented and if all the assumptions hold, of course). This comes at a cost, however. Mainly, the system may need to delay some requests and be unavailable when for example too many nodes (or the communication between them) aren't working. This is necessary to assure that no inconsistent replies are given.
To summarize: the main difference is that the Dynamo-like systems can return inconsistent results during writes or after failures for some time (but will eventually recover from it), whereas Paxos based systems can guarantee that there are never any such inconsistencies by sometimes being unavailable and delaying requests instead.
| Cassandra | 12,156,517 | 45 |
I have a Cassandra cluster with three nodes, two of which are up. They are all in the same DC. When my Java application goes to write to the cluster, I get an error in my application that seems to be caused by some problem with Cassandra:
Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough replica available for query at consistency ONE (1 required but only 0 alive)
at com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:79)
The part that doesn't make sense is that "1 required but only 0 alive" statement. There are two nodes up, which means that one should be "alive" for replication.
Or am I misunderstanding the error message?
Thanks.
| You are likely getting this error because the keyspace that the table you are querying belongs to has a replication factor of one, is that correct?
If the partition you are reading / updating does not have enough available replicas (nodes with that data) to meet the consistency level, you will get this error.
If you want to be able to handle more than 1 node being unavailable, what you could do is look into altering your keyspace to set a higher replication factor, preferably three in this case, and then running a nodetool repair on each node to get all of your data onto all nodes. With this change, you would be able to survive the loss of 2 nodes and still read at a consistency level of one.
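A sketch of that change (the keyspace name is a placeholder; use NetworkTopologyStrategy instead if you have multiple data centers):

ALTER KEYSPACE my_keyspace
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- then, on each node:
-- nodetool repair my_keyspace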
This cassandra parameters calculator is a good reference for understanding the considerations of node count, replication factor, and consistency levels.
| Cassandra | 27,974,911 | 45 |
I am wondering how can I achieve pagination using Cassandra.
Let us say that I have a blog. The blog lists max 10 posts per page. To access next posts a user must click on pagination menu to access page 2 (posts 11-20), page 3 (posts 21-30), etc.
Using SQL under MySQL, I could do the following:
SELECT * FROM posts LIMIT 20,10;
The first parameter of LIMIT is offset from the beginning of result set and second argument is amount of rows to fetch. The example above returns 10 rows starting from row 20.
How can I achieve the same effect in CQL?
I have found some solutions on Google, but all of them require to have "the last result from previous query". It works for having "next" button to paginate to another 10-results-set, but what if I want to jump from page 1 to page 5?
| You don't need to use tokens, if you are using Cassandra 2.0+.
Cassandra 2.0 has auto paging.
Instead of using token function to create paging, it is now a built-in feature.
Now developers can iterate over the entire result set, without having to care that its size is larger than the memory. As the client code iterates over the results, some extra rows can be fetched, while old ones are dropped.
Looking at this in Java, note that the SELECT statement returns all matching rows, and the fetch size (the number of rows retrieved per page) is set to 24 in the example below.
I've shown a simple statement here, but the same code can be written with a prepared statement, coupled with a bound statement. It is possible to disable automatic paging, if it is not desired. It is also important to test various fetch size settings, since you will want to keep the memory use small enough, but not so small that too many round-trips to the database are taken. Check out this blog post to see how paging works server side.
Statement stmt = new SimpleStatement(
      "SELECT * FROM raw_weather_data"
      + " WHERE wsid = '725474:99999'"
      + " AND year = 2005 AND month = 6");
stmt.setFetchSize(24);                  // rows fetched per page
ResultSet rs = session.execute(stmt);
for (Row row : rs) {                    // the driver transparently fetches further pages as needed
    System.out.println(row);
}
| Cassandra | 26,757,287 | 44 |
I get the following message when executing cqlsh.bat on the command line
Connection error: ('Unable to connect to any servers', {'127.0.0.1': ProtocolError("cql_version '3.3.0' is not supported by remote (w/ native protocol). Supported versions: [u'3.2.0']",)})
I'm running Python version 2.7.10 along with Cassandra version 2.2.1. Not sure if it's related but when I start the Cassandra server I need to run "Set-ExecutionPolicy Unrestricted" on PowerShell or else it doesn't work.
| You can force cqlsh to use a specific cql version using the flag
--cqlversion="#.#.#"
Example cqlsh usage (and key/values):
cqlsh 12.34.56.78 1234 -u username -p password --cqlversion="3.2.0"
cqlsh (IP ADDR) (PORT) (DB_USERN) (DB_PASS) (VER)
| Cassandra | 33,002,404 | 44 |
How can I get the name of the datacenter in cqlsh?
It's required for the constructor of DCAwareRoundRobinPolicy.
| cqlsh> use system;
cqlsh:system> select data_center from local;
data_center
-------------
datacenter1
| Cassandra | 19,489,498 | 43 |
Both Aerospike and Cassandra say they are better than the other in their own respective benchmarks.
Reference : http://java.dzone.com/articles/benchmarking-cassandra-right and a few others.
Has anyone used both of them?
Is Aerospike as good as claimed?
Finally is it advisable to replace Cassandra with Aerospike?
| Choosing between Cassandra and Aerospike really depends on your use case more than anything. I have personally used both as a production system for the same project, and for me Aerospike was the clear winner, but that's because our use case is to have highly concurrent, low latency, transactional, small updates to billions of entries with ~10x more read than write volume. This is what Aerospike excels at; it has the lowest latency I have ever seen in a database of its kind, even when using an SSD namespace. For these reasons Aerospike was the clear choice for us.
On the other hand, Cassandra is better for high write volume and can handle larger records. Everything is page based, so it operates well on non-SSDs, but it can never give you the extremely low latency that Aerospike can unless your records fit into the cache. It's also worth noting that Cassandra is much harder to maintain from an operations perspective than Aerospike is. For us personally it was an operations nightmare, and I know that Netflix has to employ a sizable team of operations engineers solely to manage their Cassandra clusters. Also, while the system may have matured more by now, when we were using it (around the 1.0 version) we would hit strange occasional assert errors and exceptions that stop internal db actions from taking place, and typically had to wipe the data from those nodes in order to fix it every time.
Another factor here is cost, which may or may not play into your decision depending on your application. The larger the keyspace, the more expensive your Aerospike cluster will be from a hardware perspective. All keys need to be stored in memory regardless of whether it is an in-memory or SSD namespace. Once you get into the billions-of-keys range you will need terabytes of RAM in your cluster to support that with a replication factor of 2. Cassandra obviously does not have this issue since the keys and values are both stored on disk.
To answer your second 2 questions: yes, it is as good as it claims. We store about 5B keys and do ~1M TPS at peak load, and it does it without breaking a sweat (although it takes almost 20 nodes per cluster to do this with 120GB RAM each). And as for whether it is advisable to replace Cassandra with Aerospike: for us it was a definite win and the right decision. If your application fits the design of Aerospike and it works out to be cost effective then it is definitely advisable to make the switch. When it comes down to it, though, it's about your use case. If it's not clear which one is the better fit for you then try them both and see how they play out. Good luck.
Edit:
One of the reasons currently to choose Cassandra over Aerospike is when applications need certain consistency guarantees. For applications such as counters, for example, Aerospike can end up in an inconsistent state due to a network partition, whereas Cassandra can handle these through the use of conflict-free replicated data types (CRDTs). On a good network, and also for many use cases in general, this isn't an issue, but as stated earlier the performance of Aerospike can't be beaten and that's typically why it is chosen.
Edit 2:
Aerospike v4 has now introduced their version of a consistent mode (verified by Jepsen: https://jepsen.io/analyses/aerospike-3-99-0-3). Additionally Aerospike has implemented it through strong consistency whereas Cassandra only has eventual consistency through the use of CRDTs so it's still possible to read stale data. Also from personal testing I can say that the performance during normal operation did not suffer for our use case when using their strongly consistent mode.
| Cassandra | 25,443,933 | 43 |
How does Voldemort compare to Cassandra?
I'm not talking about size of community and only want to hear from people who have actually used both.
Especially I'm interested in:
How they dynamically scale when adding and removing nodes
Query performance
How they scale when adding nodes (linear)?
Write speed
| Voldemort's support for adding nodes was just added recently (this month). So I would expect Cassandra's to be more robust given the longer time to cook and a larger community testing.
Both are fast (> 10k ops/s per machine). Because of their storage designs, I would expect Cassandra to be faster at writes, and Voldemort to be faster at reads. I would also expect Cassandra's performance to degrade less as the amount of data per node increases. And of course if you need more than just a key/value data model Cassandra's ColumnFamily model wins.
I don't know of any head-to-head benchmarks since the one done for NoSQL SF last June, which found Cassandra to be somewhat faster at whatever workload mix he was using. (The "vpork" talk from http://blog.oskarsson.nu/2009/06/nosql-debrief.html) 8 months is an eternity with projects under this much development, though.
| Cassandra | 2,252,163 | 42 |
What are the strengths and weaknesses of the various NoSQL databases available?
In particular, it seems like Redis is weak when it comes to distributing write load over multiple servers. Is that the case? Is it a big problem? How big does a service have to grow before that could be a significant problem?
| The strengths and weaknesses of the NoSQL databases (and also SQL databases) are highly dependent on your use case. For very large projects, performance is king; but for brand new projects, or projects where time and money are limited, simplicity and time-to-market are probably the most important. For teaching yourself (broadening your perspective, becoming a better, more valuable programmer), perhaps the most important thing is simple, solid fundamental concepts.
What kind of project do you have in mind?
Some strengths and weaknesses, off the top of my head:
Redis
Very simple key-value "global variable server"
Very simple (some would say "non-existent") query system
Easily the fastest in this list
Transactions
Data set must fit in memory
Immature clustering, with unclear future (I'm sure it'll be great, but it's not yet decided.)
Cassandra
Arguably the most community momentum of the BigTable-like databases
Probably the easiest of this list to manage in big/growing clusters
Support for map/reduce, good for analytics, data warehousing
MUlti-datacenter replication
Tunable consistency/availability
No single point of failure
You must know what queries you will run early in the project, to prepare the data shape and indexes
CouchDB
Hands-down the best sync (replication) support, supporting master/slave, master/master, and more exotic architectures
HTTP protocol, browsers/apps can interact directly with the DB partially or entirely. (Sync is also done over HTTP)
After a brief learning curve, pretty sophisticated query system using Javascript and map/reduce
Clustered operation (no SPOF, tunable consistency/availability) is currently a significant fork (BigCouch). It will probably merge into Couch but there is no roadmap.
Similarly, clustering and multi-datacenter are theoretically possible (the "exotic" thing I mentioned) however you must write all that tooling yourself at this time.
Append only file format (both databases and indexes) consumes disk surprisingly quickly, and you must manually run compaction (vacuuming) which makes a full copy of all records in the database. The same is required for each index file. Again, you have to be your own toolsmith.
| Cassandra | 4,720,508 | 42 |
Why is using HBase a better choice than using Cassandra with Hadoop?
Can anyone please give a detailed explanation on this?
Thanks
| I don't think either is better than the others, it's not just one or the other. These are very different systems, each with their strengths and weaknesses, so it really depends on your use cases. They can definitely be used in complement of one another in the same infrastructure.
To explain the difference better I'd like to borrow a picture from Cassandra: the Definitive Guide, where they go over the CAP theorem. What they say is basically for any distributed system, you have to find a balance between consistency, availability and partition tolerance, and you can only realistically satisfy 2 of these properties. From that you can see that:
Cassandra satisfies the Availability and Partition Tolerance properties.
HBase satisfies the Consistency and Partition Tolerance properties.
When it comes to Hadoop, HBase is built on top of HDFS, which makes it pretty convenient to use if you already have a Hadoop stack. It is also supported by Cloudera, which is a standard enterprise distribution for Hadoop.
But Cassandra also has more integration with Hadoop, namely Datastax Brisk which is gaining popularity. You can also now natively stream data from the output of a Hadoop job into a Cassandra cluster using some Cassandra-provided output format (BulkOutputFormat for example), we are no longer to the point where Cassandra was just a standalone project.
In my experience, I've found that Cassandra is awesome for random reads, and not so much for scans
To put a little color to the picture, I've been using both at my job in the same infrastructure, and HBase has a very different purpose than Cassandra. I've used Cassandra mostly for real-time very fast lookups, while I've used HBase more for heavy ETL batch jobs with lower latency requirements.
This is a question that would truly be worthy of a blog post, so instead of going on and on I'd like to point you to an article which sums up a lot of the keys differences between the 2 systems. Bottom line is, there is no superior solution IMHO, and you should really think about your use cases to see which system is better suited.
| Cassandra | 14,950,728 | 42 |
Question about Cassandra
Why the hell on earth would anybody write a database ENGINE in Java ?
I can understand why you would want to have a Java interface, but the engine...
I was under the impression that there's nothing faster than C/C++, and that a database engine shouldn't be any slower than max speed, and certainly not use garbage collection...
Can anybody explain to me what possible sense that makes / why Cassandra can be faster than ordinary SQL databases that run on C/C++ code?
Edit:
Sorry for the "Why the hell on earth" part, but it really didn't make any sense to me.
I neglected to consider that a database, unlike the average garden-variety user program, needs to be started only once and then runs for a very long time, and probably also as the only program on the server, which self-evidently makes for an important performance difference.
I was more comparing/referring to a 'dysfunctional' (to put it mildly) Java tax program I was using at the time of writing (or rather would have liked to use).
In fact, unlike using Java for tax programs, using Java for writing a dedicated server program makes perfect sense.
| What do you mean, C++? Hand coded assembly would be faster if you have a few decades to spare.
| Cassandra | 2,341,866 | 41 |
While doing a bulk load of data, incrementing counters based on log data, I am encountering a timeout exception. I'm using the Datastax 2.0-rc2 Java driver.
Is this an issue with the server not being able to keep up (ie server side config issue), or is this an issue with the client getting bored waiting for the server to respond? Either way, is there an easy config change I can make that would fix this?
Exception in thread "main" com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:54)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at jason.Stats.analyseLogMessages(Stats.java:91)
at jason.Stats.main(Stats.java:48)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:54)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:92)
at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:122)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:224)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:373)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:510)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:53)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:33)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:165)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
... 21 more
One of the nodes reports this at roughly the time it occured:
ERROR [Native-Transport-Requests:12539] 2014-02-16 23:37:22,191 ErrorMessage.java (line 222) Unexpected exception during request
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
| While I don't understand the root cause of this issue, I was able to solve the problem by increasing the timeout value in the conf/cassandra.yaml file.
write_request_timeout_in_ms: 20000
| Cassandra | 21,819,035 | 41 |
I am trying to insert a simple row into the table. Can someone point out what is happening here ?
CREATE TABLE recommendation_engine_poc.user_by_category (
game_category text,
customer_id text,
amount double,
game_date timestamp,
PRIMARY KEY (game_category, customer_id)
) WITH CLUSTERING ORDER BY (customer_id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
cqlsh:recommendation_engine_poc> insert into user_by_category ('game_category','customer_id') VALUES ('Goku','12') ;
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:31 no viable alternative at input 'game_category' (insert into user_by_category (['game_categor]...)">
| Wrong syntax. Here you are:
insert into user_by_category (game_category,customer_id) VALUES
('Goku','12');
or:
insert into user_by_category ("game_category","customer_id") VALUES
('Kakarot','12');
The second one is normally used for case-sensitive column names.
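For example, quoting only matters when a column was created with a quoted, mixed-case name in the first place (the names below are hypothetical):

CREATE TABLE demo ("gameCategory" text PRIMARY KEY, customer_id text);

-- unquoted identifiers are lower-cased, so the quotes are required here
INSERT INTO demo ("gameCategory", customer_id) VALUES ('Kakarot', '12');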
| Cassandra | 32,117,170 | 41 |
I am trying to figure out exactly what these new fangled data stores such as bigtable, hbase and cassandra really are.
I work with massive amounts of stock market data, billions of rows of price/quote data that can add up to 100s of gigabytes every day (although these text files often compress by at least an order of magnitude). This data is basically a handful of numbers, two or three short strings and a timestamp (usually millisecond level). If I had to pick a unique identifier for each row, I would have to pick the whole row (since an exchange may generate multiple values for the same symbol in the same millisecond).
I suppose the simplest way to map this data to bigtable (I'm including its derivatives) is by symbol name and date (which may return a very large time series, more than million data points isn't unheard of). From reading their descriptions, it looks like multiple keys can be used with these systems. I'm also assuming that decimal numbers are not good candidates for keys.
Some of these systems (Cassandra, for example) claims to be able to do range queries. Would I be able to efficiently query, say, all values for MSFT, for a given day, between 11:00 am and 1:30 pm ?
What if I want to search across ALL symbols for a given day, and request all symbols that have a price between $10 and $10.25 (so I'm searching the values, and want keys returned as a result)?
What if I want to get two times series, subtract one from the other, and return the two times series and their result, will I have to do his logic in my own program?
Reading relevant papers seems to show that these systems are not a very good fit for massive time series systems. However, if systems such as google maps are based on them, I think time series should work as well. For example, think of time as the x-axis, prices as y-axis and symbols as named locations--all of a sudden it looks like bigtable should be the ideal store for time series (if the whole earth can be stored, retrieved, zoomed and annotated, stock market data should be trivial).
Can some expert point me in the right direction or clear up any misunderstandings.
Thanks
| I am not an expert yet, but I've been playing with Cassandra for a few days now, and I have some answers for you:
Don't worry about the amount of data; it's irrelevant with systems like Cassandra, if you have $$$ for a large hardware cluster.
Some of these systems (Cassandra, for example) claims to be able to do range queries. Would I be able to efficiently query, say, all values for MSFT, for a given day, between 11:00 am and 1:30 pm ?
Cassandra is very useful when you know how to work with keys. It can sift through keys very quickly. So to search for MSFT between 11:00 and 1:30pm, you'd have to key your rows like this:
MSFT-timestamp, GOOG-timestamp , ..etc
Then you can tell Cassandra to find all keys that start with MSFT-now and end with MSFT-now+1hour.
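This answer predates CQL; a rough sketch of the same idea in today's CQL (the table and columns are illustrative) would be:

CREATE TABLE ticks (
    symbol text,
    ts timestamp,
    price decimal,
    PRIMARY KEY (symbol, ts)
);

-- all MSFT values for a given day between 11:00 and 13:30
SELECT * FROM ticks
WHERE symbol = 'MSFT'
  AND ts >= '2010-01-15 11:00:00'
  AND ts <= '2010-01-15 13:30:00';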
What if I want to search across ALL symbols for a given day, and request all symbols that have a price between $10 and $10.25 (so I'm searching the values, and want keys returned as a result)?
I am not an expert, but so far I have realized that Cassandra doesn't search by values at all. So if you want to do the above, you will have to make another table dedicated just to this problem and design your schema to fit the case. But it won't be much different from what I described above. It's all about naming your keys and columns. Cassandra can find them very quickly!
What if I want to get two times series, subtract one from the other, and return the two times series and their result, will I have to do his logic in my own program?
Correct, all logic is done inside your program. This is not MySQL. This is just a storage engine. (But I am sure the next versions will offer these sort of things)
Please remember, that I am a novice at this, if I am wrong, feel free to correct me.
| Cassandra | 1,623,399 | 39 |
Is there a paper/blog-post on when to use Cassandra or Membase or Hadoop or plain old relational databases ? Is there a paper discussing the strengths/weaknesses of each, and on what scenarios either of these technologies should be chosen ?
I am thinking of writing a new webservice which will have about a million hits per day and data spanning about a few terabytes.
| EDIT The NoSQL Ecosystem by Adam Marcus (from the book The Architecture of open source applications): http://www.aosabook.org/en/nosql.html
general thoughts and comparison http://www.thoughtworks.com/articles/nosql-comparison
technical comparison http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
a Master's Thesis - Analysis and Classification of NoSQL Databases http://scholar.googleusercontent.com/scholar?q=cache:rQlbiz6bojAJ:scholar.google.com/+comparison+of+nosql&hl=en&as_sdt=0,5&as_vis=1
| Cassandra | 8,120,036 | 39 |
I am going through apache cassandra and working on sample data insertion, retrieving etc.
The documentation is very limited.
I am interested in knowing
can we completely replace relation db like mysql/ oracle with cassandra?
does cassandra support rollback/ commit?
do Cassandra clients (thrift/hector) support fetching associated objects (objects where we save one super column's key in another super column family)?
This will help me a lot to proceed further.
thank you in advance.
| Short answer: No.
By design, Cassandra values availability and partition tolerance over consistency. Basically, it's not possible to get acceptable latency while maintaining all three qualities: one has to be sacrificed. This is called the CAP theorem.
The amount of consistency is configurable in Cassandra using consistency levels, but there are no rollback semantics. There's no guarantee that you'll be able to roll back your changes even if the first write succeeds.
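For example, in today's cqlsh the consistency level can be chosen per session (the keyspace and table names here are made up):

CONSISTENCY QUORUM;
SELECT * FROM my_keyspace.users WHERE id = 1;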
If you want to build application with transactions or locks on top of Cassandra, you probably want to look at Zookeeper, which can be used to provide distributed synchronization.
You might've already guessed this, but Cassandra doesn't have foreign keys or anything like that. This has to be handled manually. I'm not that familiar with Hector, but a higher-level client could be able to do this semi-automatically.
Whether or not you can use Cassandra to easily replace a RDBMS depends on your specific use case. In your use case (based on your questions), it might be hard to do so.
| Cassandra | 2,976,932 | 37 |
I'm a little confused about Cassandra seed nodes and how clients are meant to connect to the cluster. I can't seem to find this bit of information in the documentation.
Do the clients only contain a list of the seed nodes, with each node delegating a new host for the client to connect to?
Should each client use a small sample of random nodes in the DC to connect to?
Or, should each client use all the nodes in the DC?
| Answering my own question:
Seeds
From the FAQ:
Seeds are used during startup to discover the cluster.
Also from the DataStax documentation on "Gossip":
The seed node designation has no purpose other than bootstrapping the gossip process
for new nodes joining the cluster. Seed nodes are not a single
point of failure, nor do they have any other special purpose in
cluster operations beyond the bootstrapping of nodes.
From these details it seems that a seed is nothing special to clients.
Clients
From the DataStax documentation on client requests:
All nodes in Cassandra are peers. A client read or write request can
go to any node in the cluster. When a client connects to a node and
issues a read or write request, that node serves as the coordinator
for that particular client operation.
The job of the coordinator is to act as a proxy between the client
application and the nodes (or replicas) that own the data being
requested. The coordinator determines which nodes in the ring should
get the request based on the cluster configured partitioner and
replica placement strategy.
I gather that the pool of nodes that a client connects to can just be a handful of (random?) nodes in the DC to allow for potential failures.
| Cassandra | 10,407,072 | 37 |
I have to work with a column family that has (user_id, timestamp) as key. In my query I would like to fetch all records in a given time range independent of the user_id. This is the exact table schema:
CREATE TABLE userlog (
user_id text,
ts timestamp,
action text,
app_type text,
channel_name text,
channel_session_id text,
pid text,
region_id text,
PRIMARY KEY (user_id, ts)
)
I tried to run
SELECT * FROM userlog WHERE ts >= '2013-01-01 00:00:00+0200' AND ts <= '2013-08-13 23:59:00+0200' ALLOW FILTERING;
which works fine on my local cassandra installation containing a small data set but fails with
Request did not complete within rpc_timeout.
on the productive system containing all the data.
Is there a, preferably CQL, query that runs smoothly with the given column family, or do we have to change the design?
| The timeout is because Cassandra is taking longer than the timeout (default is 10 seconds) to return the data. For your query, Cassandra will attempt to fetch the entire dataset before returning. For more than a few records this can easily take longer than the timeout.
For queries that are producing lots of data you need to page e.g.
SELECT * FROM userlog WHERE ts >= '2013-01-01 00:00:00+0200' AND ts <= '2013-08-13 23:59:00+0200' AND token(user_id) > previous_token LIMIT 100 ALLOW FILTERING;
where user_id is the previous user_id returned. You will also need to page on ts to guarantee you get all the records for the last user_id returned.
Alternatively, in Cassandra 2.0.0 (just released), paging is done transparently so your original query should work with no timeout or manual paging.
The ALLOW FILTERING means Cassandra is reading through all your data, but only returning data within the range specified. This is only efficient if the range is most of the data. If you wanted to find records within e.g. a 5 minute time window, this would be very inefficient.
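As an illustration of the transparent paging route: with Cassandra 2.0 and a driver that supports it (for example the DataStax Java driver 2.0), you only control the page size and iterate. The sketch below assumes an existing session and the userlog table from the question:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

Statement stmt = new SimpleStatement(
        "SELECT * FROM userlog " +
        "WHERE ts >= '2013-01-01 00:00:00+0200' " +
        "AND ts <= '2013-08-13 23:59:00+0200' ALLOW FILTERING");
stmt.setFetchSize(500);                 // rows fetched per round trip
ResultSet rs = session.execute(stmt);
for (Row row : rs) {                    // further pages are fetched lazily while iterating
    System.out.println(row.getString("user_id"));
}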
| Cassandra | 18,697,725 | 37 |
I was loading a large CSV into Cassandra using cassandra-loader.
The VM ran out of disk space during this process and crashed. I allocated more disk space to the VM and tried starting Cassandra, but it refused to start due to problems with SSTables and the commit log.
I could not run nodetool repair as it only works when the node is online.
I ran sstablescrub which took about 1 hour to finish. So I thought it might have fixed it.
But I still get this error in system.log
ERROR [SSTableBatchOpen:4] 2015-10-23 18:57:45,035 SSTableReader.java:506 - Corrupt sstable /var/lib/cassandra/data/keyspace1/location-777a33d0772911e597a98b820c5778a4/la-1709-big=[TOC.txt, CompressionInfo.db, Statistics.db, Digest.adler32, Data.db, Index.db, Filter.db]; skipping table
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:125) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:187) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:179) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:703) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:664) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:458) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:363) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:501) ~[apache-cassandra-2.2.3.jar:2.2.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) ~[na:1.8.0_60]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) ~[na:1.8.0_60]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) ~[na:1.8.0_60]
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:96) ~[apache-cassandra-2.2.3.jar:2.2.3]
... 15 common frames omitted
INFO [main] 2015-10-23 18:57:45,193 ColumnFamilyStore.java:382 - Initializing system_auth.role_permissions
INFO [main] 2015-10-23 18:57:45,201 ColumnFamilyStore.java:382 - Initializing system_auth.resource_role_permissons_index
INFO [main] 2015-10-23 18:57:45,213 ColumnFamilyStore.java:382 - Initializing system_auth.roles
INFO [main] 2015-10-23 18:57:45,233 ColumnFamilyStore.java:382 - Initializing system_auth.role_members
INFO [main] 2015-10-23 18:57:45,240 ColumnFamilyStore.java:382 - Initializing system_traces.sessions
INFO [main] 2015-10-23 18:57:45,252 ColumnFamilyStore.java:382 - Initializing system_traces.events
INFO [main] 2015-10-23 18:57:45,265 ColumnFamilyStore.java:382 - Initializing simplex.songs
INFO [main] 2015-10-23 18:57:45,276 ColumnFamilyStore.java:382 - Initializing simplex.playlists
INFO [pool-2-thread-1] 2015-10-23 18:57:45,289 AutoSavingCache.java:187 - reading saved cache /var/lib/cassandra/saved_caches/KeyCache-ca.db
INFO [pool-2-thread-1] 2015-10-23 18:57:45,313 AutoSavingCache.java:163 - Completed loading (25 ms; 36 keys) KeyCache cache
INFO [main] 2015-10-23 18:57:45,351 CommitLog.java:168 - Replaying /var/lib/cassandra/commitlog/CommitLog-5-1445578022702.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022703.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022704.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022705.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022706.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022707.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022708.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022709.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022710.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022712.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022713.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022714.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022715.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022716.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022717.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022719.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022720.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022721.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022722.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022723.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022724.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022725.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022727.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022728.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022730.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022731.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022732.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022733.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022734.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022736.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022738.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022740.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022741.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022743.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022744.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022745.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022746.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022748.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022749.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022750.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022751.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022752.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022753.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022755.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022756.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022758.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022759.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022760.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022761.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022763.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022764.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022765.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022766.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022767.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022768.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022769.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022770.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022771.log, 
/var/lib/cassandra/commitlog/CommitLog-5-1445578022772.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022773.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022774.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022775.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022776.log, /var/lib/cassandra/commitlog/CommitLog-5-1445588991268.log, /var/lib/cassandra/commitlog/CommitLog-5-1445589094722.log, /var/lib/cassandra/commitlog/CommitLog-5-1445589149527.log, /var/lib/cassandra/commitlog/CommitLog-5-1445595828633.log, /var/lib/cassandra/commitlog/CommitLog-5-1445595898055.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596033717.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596400441.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596601854.log, /var/lib/cassandra/commitlog/CommitLog-5-1445598032544.log, /var/lib/cassandra/commitlog/CommitLog-5-1445598758663.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601112953.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601937334.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601985416.log, /var/lib/cassandra/commitlog/CommitLog-5-1445604504389.log, /var/lib/cassandra/commitlog/CommitLog-5-1445606516196.log
ERROR [main] 2015-10-23 18:59:05,091 JVMStabilityInspector.java:78 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Mutation checksum failure at 4110758 in CommitLog-5-1445578022776.log
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:622) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:492) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:388) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:147) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:189) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:169) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:273) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:513) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:622) [apache-cassandra-2.2.3.jar:2.2.3]
Exiting due to error while processing commit log during initialization.
How do I fix this? This is test data, I am okay with losing it. How would one handle this in production to avoid data loss?
Tried setting disk_failure_policy: ignore so that I could run nodetool repair once the server is up. But the server does not start even with this setting.
I am operating a single node and replication factor is 1. Would having more nodes and a >1 replication factor enabled me to fix an issue like this without data loss?
I am using Cassandra 2.2.3
| Since you don't care about the data, removing the files from the commit log directory (in your log output that is /var/lib/cassandra/commitlog) should be the easiest solution. The mutations in those segments are lost, but Cassandra should then start without trying to replay them.
| Cassandra | 33,304,367 | 37 |
I'm studying distributed systems and referring to this old question: stackoverflow link
I really can't understand the difference between exactly-once, at-least-once and at-most-once guarantees. I have come across these concepts in Kafka, Flink, Storm and also Cassandra. For instance, someone says that Flink is better because it has exactly-once guarantees while Storm has only at-least-once.
I understand that exactly-once mode is better for latency, but at the same time it's worse for fault tolerance, right? How can I recover a stream if I don't have duplicates? And then... if this is a real problem, why is the exactly-once guarantee considered better than the others?
Can someone give me better definitions?
| Below definitions are quoted from Akka Documentation
at-most-once delivery
means that for each message handed to the mechanism, that message is
delivered zero or one times; in more casual terms it means that
messages may be lost.
at-least-once delivery
means that for each message handed to the mechanism potentially
multiple attempts are made at delivering it, such that at least one
succeeds; again, in more casual terms this means that messages may be
duplicated but not lost.
exactly-once delivery
means that for each message handed to the mechanism exactly one
delivery is made to the recipient; the message can neither be lost nor
duplicated.
The first one is the cheapest—highest performance, least implementation overhead—because it can be done in a fire-and-forget fashion without keeping state at the sending end or in the transport mechanism. The second one requires retries to counter transport losses, which means keeping state at the sending end and having an acknowledgement mechanism at the receiving end. The third is most expensive—and has consequently worst performance—because in addition to the second it requires state to be kept at the receiving end in order to filter out duplicate deliveries
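To make that bookkeeping concrete, here is a deliberately simplified Java sketch; the Transport and Message types are hypothetical and not taken from any of the systems mentioned above:

import java.util.HashSet;
import java.util.Set;

class DeliverySketch {
    interface Message { String id(); }
    interface Transport {
        void send(Message m);                 // fire and forget
        boolean sendAndAwaitAck(Message m);   // true if the receiver acknowledged
    }

    // at-most-once: no state kept anywhere; a lost message stays lost
    void atMostOnce(Transport t, Message m) {
        t.send(m);
    }

    // at-least-once: the sender keeps retrying until it sees an ack,
    // so the receiver may see the same message more than once
    void atLeastOnce(Transport t, Message m) throws InterruptedException {
        while (!t.sendAndAwaitAck(m)) {
            Thread.sleep(1000);
        }
    }

    // exactly-once (effectively): at-least-once on the sending side plus
    // de-duplication state on the receiving side
    private final Set<String> processedIds = new HashSet<>();
    void receive(Message m) {
        if (processedIds.add(m.id())) {       // add() returns false for a duplicate id
            process(m);
        }
    }

    void process(Message m) { /* application logic */ }
}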
| Cassandra | 44,204,973 | 37 |
In the Cassandra Wiki, it is said that there is a limit of 2 billion cells (rows x columns) per partition. But it is unclear to me what a partition is.
Do we have one partition per node per column family, which would mean that the max size of a column family would be 2 billion cells * number of nodes in the cluster?
Or will Cassandra create as many partitions as required to store all the data of a column family?
I am starting a new project so I will use Cassandra 2.0.
| With the advent of CQL3 the terminology has changed slightly from the old thrift terms.
Basically
Create Table foo (a int , b int, c int, d int, PRIMARY KEY ((a,b),c))
Will make a CQL3 table. The information in a and b is used to make the partition key, which determines which node the information will reside on. This is the 'partition' talked about in the 2 billion cell limit.
Within that partition the information will be organized by c, known as the clustering key. Together, a, b and c define a unique value of d. In this case the number of cells in a partition would be c * d. So in this example, for any given pair of a and b there can only be 2 billion combinations of c and d.
So as you model your data you want to ensure that the partition key will vary, so that your data will be distributed evenly across Cassandra. Then use clustering keys to ensure that your data is available in the way you want it.
Watch this video for more info on data modeling in Cassandra
The Datamodel is Dead, Long live the datamodel
Edit: One more example from the comments
Create Table foo (a int , b int, c int, d int, e int, f int, PRIMARY KEY ((a,b),c,d))
Partitions will be uniquely identified by a combination of a and b.
Within a partition c and d will be used to order cells within the partition so the layout will
look a little like:
(a1,b1) --> [c1,d1 : e1], [c1,d1 :f1], [c1,d2 : e2] ....
So in this example you can have 2 Billion cells with each cell containing:
A value of c
A value of d
A value of either e or f
So the 2 billion limit refers to the sum of unique tuples of (c,d,e) and (c,d,f).
| Cassandra | 20,512,710 | 36 |
How do you check which cql version is currently being used in cqlsh?
In sql, you do this:
Select @@version
| There are a couple of ways to go about this.
From within cqlsh, you can simply show version.
aploetz@cqlsh> show version
[cqlsh 5.0.1 | Cassandra 2.1.8 | CQL spec 3.2.0 | Native protocol v3]
However, that only works from within cqlsh. Fortunately, you can also query system.local for that information as well.
aploetz@cqlsh> SELECT cql_version FROM system.local;
cql_version
-------------
3.2.0
(1 rows)
The system.local table has other useful bits of information about your current node, cluster, and Cassandra tool versions as well. For more info, check it out with either SELECT * FROM system.local; (only has 1 row) or desc table system.local.
| Cassandra | 31,502,442 | 36 |
The LIKE condition allows us to use wildcards in the where clause of an SQL statement. This allows us to perform pattern matching. The LIKE condition can be used in any valid SQL statement - select, insert, update, or delete. Like this
SELECT * FROM users
WHERE user_name like 'babu%';
Is a similar query, with pattern matching like the one above, available in Cassandra (e.g. from the CLI/cqlsh)?
| Since Cassandra 3.4 (3.5 recommended), LIKE queries can be achieved using a SSTable Attached Secondary Index (SASI).
For example:
CREATE TABLE cycling.cyclist_name (
id UUID PRIMARY KEY,
lastname text,
firstname text
);
Creating the SASI as follows:
CREATE CUSTOM INDEX fn_prefix ON cyclist_name (firstname)
USING 'org.apache.cassandra.index.sasi.SASIIndex';
Then a prefix LIKE query is working:
SELECT * FROM cyclist_name WHERE firstname LIKE 'M%';
SELECT * FROM cyclist_name WHERE firstname LIKE 'Mic%';
These examples and more configuration options, like suffix queries, can be found in the documentation
A more in depth explanation about how SASI works can be found here.
| Cassandra | 9,905,795 | 35 |
I'm working on a distributed database. I'm trying to generate a unique ID that will serve as a column family primary key in Cassandra.
I read some articles about doing this with Java using UUID but it seems like there is a probability for collision (even if it's very low).
I wonder if there is a way to generate a unique ID based on time maybe?
| You can use the TimeUUID type in Cassandra, which backs a Type 1 UUID. This uses the current time and the creator's MAC address and a sequence number. If the TimeUUID number is generated correctly this can be done with zero collisions (you can use the CQL now() method or insert your own, the java SDK's provide some thread-safe implementations). The main advantage of TimeUUIDs is that the IDs can be time ordered. See http://wiki.apache.org/cassandra/TimeBaseUUIDNotes for more info.
However, the time ordering is unlikely to be useful for row primary keys, since the ordering is useless when using a hash partitioner, though possible using a clustering key. And also the complexity of generating a unique ID could be a source of bugs if you roll your own. Cassandra also supports Type 4 UUIDs by using the UUID type. These are just random bits. There is a collision probability, but the collision probability (assuming uncorrelated random number sources, which it will be if you generate in Java) is extremely low - if you created 1 billion a second for 100 years the probability of one collision is about 50%. (See http://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates for more details.)
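For example, in Java you could generate either flavour like this; the random variant needs only the JDK, while the time-based variant below assumes the UUIDs helper that ships with the DataStax Java driver:

import java.util.UUID;
import com.datastax.driver.core.utils.UUIDs;

// Type 4 (random) UUID -- collision probability is negligible in practice
UUID randomId = UUID.randomUUID();

// Type 1 (time-based) UUID, suitable for Cassandra's timeuuid type; built from
// the current time plus a node/clock-sequence component
UUID timeId = UUIDs.timeBased();

System.out.println(randomId + " / " + timeId);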
| Cassandra | 16,084,573 | 35 |
I am having trouble getting Cassandra up and running.
I have downloaded Cassandra 2.0.1 and Python 3.3.2.
Upon starting the CLI for cassandra I get an error:
C:\Dev\ApacheCassandra\apache-cassandra-2.0.1\bin>python cqlsh
File "cqlsh", line 95
except ImportError, e:
^
SyntaxError: invalid syntax
Any suggestions? I am going to downgrade python to 2.7 and see if that fixes my issue.
Thanks!
| The version of Cassandra that you are using is only compatible with Python 2.x.
The following syntax:
except ImportError, e:
was deprecated in Python 2.7 and removed in Python 3.x. Nowadays, you use the as keyword:
except ImportError as e:
This means that you need to either downgrade to Python 2.x or get a version of Cassandra that is compatible with Python 3.x.
| Cassandra | 19,142,231 | 35 |
I'm working on a project that is considering using Cassandra as a database. We would like to eventually migrate to Cassandra even if we use MySQL to start with, given its scalability. I know that big companies like Facebook, Digg, and recently Twitter is using Cassandra, but I don't believe any of those sites run off Rails. My question is whether or not it's feasible to use Cassandra using Ruby on Rails. Points to consider:
We heavily rely on the Authlogic gem. Would switching to Cassandra affect how it works?
Are there any mature ruby clients for Cassandra? Looking on Github it seems that fauna's client (now twitters's client) is the most mature. Has anyone had production experience with it?
Appreciate any tips.
| Twitter is running rails on most of their front end. Fauna's client is actually built and released by twitter, so you can be pretty certain that it's up to date and stable on large workloads. Looking at the history of commits shows that there are frequent improvements being pushed to it, which is great.
Most likely Authlogic would need to be customized to work properly with Cassandra. In particular, it appears to provide certain methods based on named_scope and relational data.
It does appear that someone has built a plugin for DataMapper support in Authlogic: http://twitter.com/collintmiller/statuses/2064046718. You may be able to use that as a starting point for making it compatible with Cassandra.
Good luck!
| Cassandra | 2,382,436 | 34 |
This introduction to Cassandra Replication and Consistency (slides 14-15) boldly asserts:
R+W>N guarantees overlap of read and write quorums.
Please imagine this inequality has huge fangs, dripping with the blood
of innocent, enterprise developers so you can best appreciate the
terror it inspires.
I understand having the sum of the read and write Consistency Levels (R+W) greater than Replication Factor (N) is a good idea... but what's the big deal?
What are the implications, and how does R+W>N compare to the alternatives?
R+W < N
R+W = N
R+W >> N
| The basic problem we are trying to solve is this:
Can a situation occur in which a read doesn't return the most up-to-date value?
Obviously, this is best avoided if possible!
If R+W <= N, then this situation can occur.
A write could send a new value to one group of nodes, while a subsequent read could read from a completely separate group of nodes, and thus miss the new value written.
If R+W > N, then this situation is guaranteed not to occur.
There are N nodes that might hold the value. A write contacts at least W nodes - place a "write" sticker on each of these. A subsequent read contacts at least R nodes - place a "read" sticker on each of these. There are R+W stickers but only N nodes, so at least one node must have both stickers. That is, at least one node participates in both the read and the write, so is able to return the latest write to the read operation.
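As a concrete example: with a replication factor of N = 3, writing at QUORUM (W = 2) and reading at QUORUM (R = 2) gives R + W = 4 > 3, so every read quorum must overlap every write quorum in at least one node and will therefore see the latest write. With W = 1 and R = 1 (R + W = 2 <= 3), a read can land entirely on replicas that the write never reached, and stale data can be returned.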
R+W >> N is impossible.
The maximum number of nodes that you can read from, or write to, is N (the replication factor, by definition). So the most we can have is R = N and W = N, i.e. R+W = 2N. This corresponds to reading and writing at ConsistencyLevel ALL. That is, you just write to all the nodes and read from all the nodes, nothing fancy happens.
| Cassandra | 7,817,513 | 34 |
I am doing this to connect cassandra.But my code is returning an error.. Here is my code
public class CassandraConnection {
public static void main(String[] args) {
String serverIp = "166.78.10.41";
String keyspace = "gamma";
CassandraConnection connection;
Cluster cluster = Cluster.builder()
.addContactPoints(serverIp)
.build();
Session session = cluster.connect(keyspace);
String cqlStatement = "SELECT * FROM TestCF";
for (Row row : session.execute(cqlStatement)) {
System.out.println(row.toString());
}
}
}
this is the error log ..
Failed to execute goal on project CassandraConnection: Could not resolve dependencies for project com.mycompany:CassandraConnection:jar:1.0-SNAPSHOT: The following artifacts could not be resolved: org.specs2:scalaz-effect_2.11.0-SNAPSHOT:jar:7.0.1-SNAPSHOT, org.scalaz:scalaz-effect_2.9.3:jar:7.1.0-SNAPSHOT: Could not find artifact org.specs2:scalaz-effect_2.11.0-SNAPSHOT:jar:7.0.1-SNAPSHOT -> [Help 1]
To see the full stack trace of the errors, re-run Maven with the -e switch.
Re-run Maven using the -X switch to enable full debug logging.
For more information about the errors and possible solutions, please read the following articles: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
| Did you do any research on the matter?
Picking a driver
You need a way to communicate with Cassandra, and the best option is to use a high-level API. You have a wide range of choices here, but when we look at it from a high-level perspective there are really two choices.
CQL based drivers - A higher-level abstraction over what Thrift does. Also the newer tool; companies providing support / documentation for Cassandra recommend that new Cassandra applications are CQL based.
Thrift based drivers - Have access to low-level storage, so it's easier to get things wrong.
I'll use DataStax's CQL driver.
Download and build the driver from datastax's github repo OR use maven and add the following dependencies:
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-mapping</artifactId>
<version>2.1.2</version>
</dependency>
Picking Maven is a good idea, as it will manage all your dependencies for you, but if you don't use Maven, at least you will learn about managing jars and reading through stack traces.
Code
The driver's documentation is coming along nicely. If you get stuck, read through it; the documentation contains lots of examples.
I'll use the following two variables throughout the examples.
String serverIP = "127.0.0.1";
String keyspace = "system";
Cluster cluster = Cluster.builder()
.addContactPoints(serverIP)
.build();
Session session = cluster.connect(keyspace);
// you are now connected to the cluster, congrats!
Read
String cqlStatement = "SELECT * FROM local";
for (Row row : session.execute(cqlStatement)) {
System.out.println(row.toString());
}
Create/Update/Delete
// for all three it works the same way (as a note, the 'system' keyspace can't
// be modified by users, so below I'm using a keyspace named 'exampkeyspace' and
// a table (or column family) called users)
String cqlStatementC = "INSERT INTO exampkeyspace.users (username, password) " +
"VALUES ('Serenity', 'fa3dfQefx')";
String cqlStatementU = "UPDATE exampkeyspace.users " +
"SET password = 'zzaEcvAf32hla'," +
"WHERE username = 'Serenity';";
String cqlStatementD = "DELETE FROM exampkeyspace.users " +
"WHERE username = 'Serenity';";
session.execute(cqlStatementC); // interchangeable, run whichever of the statements you wish.
Other useful code
Creating a Keyspace
String cqlStatement = "CREATE KEYSPACE exampkeyspace WITH " +
"replication = {'class':'SimpleStrategy','replication_factor':1}";
session.execute(cqlStatement);
Creating a ColumnFamily (aka table)
// based on the above keyspace, we would change the cluster and session as follows:
Cluster cluster = Cluster.builder()
.addContactPoints(serverIP)
.build();
Session session = cluster.connect("exampkeyspace");
String cqlStatement = "CREATE TABLE users (" +
" username varchar PRIMARY KEY," +
" password varchar " +
");";
session.execute(cqlStatement);
| Cassandra | 16,870,502 | 34 |
Very recently I installed JDK 9 and Apache Cassandra from the official site. But now when I start Cassandra in the foreground, I get this message:
apache-cassandra-3.11.1/bin$ ./cassandra -f
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/home/mmatak/monero/apache-cassandra-3.11.1/logs/gc.log instead.
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
So far I didn't find any solution for this. Is it maybe possible that Java 9 and Cassandra are not yet compatible? Here is that problem mentioned as well - #CASSANDRA-13107
But I am not sure how to just "remove the flag"? Where is it possible to override or remove this flag?
| I had exactly the same issue:
Can't start Cassandra (Single-Node Cluster on CentOS7)
If it is an option for you, using Java 8, instead of 9, is the simplest way to solve the issue.
| Cassandra | 46,936,170 | 34 |
I'm in a situation where I need to change the composite primary key as follows:
Old Primary Key: (id, source, attribute_name, updated_at);
New Primary Key I want: (source, id, attribute_name, updated_at);
I issued the following (mysql like) command:
ALTER TABLE general_trend_table
DROP PRIMARY KEY,
ADD PRIMARY KEY(source, id, attribute_name, updated_at);
I got the following error:
Bad Request: line 1:38 no viable alternative at input 'PRIMARY'
Any idea how to get around this problem? More specifically, I want to know if there is any way to change the primary key in Cassandra.
| There is no way to change a primary key, as it defines how your data is physically stored.
You can create a new table with the new primary key, copy data from the old one, and then drop the old table.
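A minimal sketch of that workaround with the DataStax Java driver is below. The column types are assumptions (the question only shows the column names), any non-key columns would need to be carried over too, and for large tables cqlsh's COPY command or a bulk loader is preferable to a row-by-row copy:

// Assumed types -- adjust to match the real general_trend_table definition.
session.execute("CREATE TABLE general_trend_table_v2 (" +
        "  source text, id text, attribute_name text, updated_at timestamp," +
        "  PRIMARY KEY (source, id, attribute_name, updated_at))");

PreparedStatement insert = session.prepare(
        "INSERT INTO general_trend_table_v2 (source, id, attribute_name, updated_at) " +
        "VALUES (?, ?, ?, ?)");

for (Row row : session.execute("SELECT source, id, attribute_name, updated_at " +
        "FROM general_trend_table")) {
    // driver 2.x maps a CQL timestamp to java.util.Date via getDate()
    session.execute(insert.bind(row.getString("source"), row.getString("id"),
            row.getString("attribute_name"), row.getDate("updated_at")));
}

// Once the copy has been verified:
// session.execute("DROP TABLE general_trend_table");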
| Cassandra | 22,139,747 | 32 |
I've read quite a few articles and a lot of question/answers on SO about Cassandra but I still can't figure out how Cassandra decides which node(s) to go to when it's reading the data.
First, some assumptions about an imaginary cluster:
Replication Strategy = simple
Using Random Partitioner
Cluster of 10 nodes
Replication Factor of 5
Here's my understanding of how writes work based on various Datastax articles and other blog posts I've read:
Client sends the data to a random node
The "random" node is decided based on the MD5 hash of the primary key.
Data is written to the commit_log and memtable and then propagated 4 times (with RF = 5).
The 4 next nodes in the ring are then selected and data is persisted in them.
So far, so good.
Now the question is, when the client sends a read request (say with CL = 3) to the cluster, how does Cassandra know which nodes (5 out of 10 as the worst case scenario) it needs to contact to get this data? Surely it's not going to all 10 nodes as that would be inefficient.
Am I correct in assuming that Cassandra will again, do an MD5 hash of the primary key (of the request) and choose the node according to that and then walks the ring?
Also, how does the network topology case work? if I have multiple data centers, how does Cassandra know which nodes in each DC/Rack contain the data? From what I understand, only the first node is obvious (since the hash of the primary key has resulted in that node explicitly).
Sorry if the question is not very clear and please add a comment if you need more details about my question.
Many thanks,
|
Client sends the data to a random node
It might seem that way, but there is actually a non-random way that your driver picks a node to talk to. This node is called a "coordinator node" and is typically chosen based-on having the least (closest) "network distance." Client requests can really be sent to any node, and at first they will be sent to the nodes which your driver knows about. But once it connects and understands the topology of your cluster, it may change to a "closer" coordinator.
The nodes in your cluster exchange topology information with each other using the Gossip Protocol. The gossiper runs every second, and ensures that all nodes are kept current with data from whichever Snitch you have configured. The snitch keeps track of which data centers and racks each node belongs to.
In this way, the coordinator node also has data about which nodes are responsible for each token range. You can see this information by running a nodetool ring from the command line. Although if you are using vnodes, that will be trickier to ascertain, as data on all 256 (default) virtual nodes will quickly flash by on the screen.
So let's say that I have a table that I'm using to keep track of ship crew members by their first name, and let's assume that I want to look-up Malcolm Reynolds. Running this query:
SELECT token(firstname),firstname, id, lastname
FROM usersbyfirstname WHERE firstname='Mal';
...returns this row:
token(firstname) | firstname | id | lastname
----------------------+-----------+----+-----------
4016264465811926804 | Mal | 2 | Reynolds
By running a nodetool ring I can see which node is responsible for this token:
192.168.1.22 rack1 Up Normal 348.31 KB 3976595151390728557
192.168.1.22 rack1 Up Normal 348.31 KB 4142666302960897745
Or even easier, I can use nodetool getendpoints to see this data:
$ nodetool getendpoints stackoverflow usersbyfirstname Mal
Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar
192.168.1.22
For more information, check out some of the items linked above, or try running nodetool gossipinfo.
| Cassandra | 31,669,991 | 32 |
I have read in the latest release that super columns are not desirable due to "performance issues", but nowhere is this explained.
Then I read articles such as this one that give wonderful indexing patterns using super columns.
This leaves me with no idea of what is currently the best way to do indexing in Cassandra.
What are the performance issues of super columns?
Where can I find current best practices for indexing?
| Super columns suffer from a number of problems, not least of which is that it is necessary for Cassandra to deserialize all of the sub-columns of a super column when querying (even if the result will only return a small subset). As a result, there is a practical limit to the number of sub-columns per super column that can be stored before performance suffers.
In theory, this could be fixed within Cassandra by properly indexing sub-columns, but consensus is that composite columns are a better solution, and they work without the added complexity.
The easiest way to make use of composite columns is to take advantage of the abstraction that CQL 3 provides. Consider the following schema:
CREATE TABLE messages(
username text,
sent_at timestamp,
message text,
sender text,
PRIMARY KEY(username, sent_at)
);
Username here is the row key, but we've used a PRIMARY KEY definition which creates a grouping of row key and the sent_at column. This is important as it has the effect of indexing that attribute.
INSERT INTO messages (username, sent_at, message, sender) VALUES ('bob', '2012-08-01 11:42:15', 'Hi', 'alice');
INSERT INTO messages (username, sent_at, message, sender) VALUES ('alice', '2012-08-01 11:42:37', 'Hi yourself', 'bob');
INSERT INTO messages (username, sent_at, message, sender) VALUES ('bob', '2012-08-01 11:43:00', 'What are you doing later?', 'alice');
INSERT INTO messages (username, sent_at, message, sender) VALUES ('bob', '2012-08-01 11:47:14', 'Bob?', 'alice');
Behind the scenes Cassandra will store the above inserted data something like this:
alice: (2012-08-01 11:42:37,message): Hi yourself, (2012-08-01 11:42:37,sender): bob
bob: (2012-08-01 11:42:15,message): Hi, (2012-08-01 11:42:15,sender): alice, (2012-08-01 11:43:00,message): What are you doing later?, (2012-08-01 11:43:00,sender): alice (2012-08-01 11:47:14,message): Bob?, (2012-08-01 11:47:14,sender): alice
But using CQL 3, we can query the "row" using a sent_at predicate, and get back a tabular result set.
SELECT * FROM messages WHERE username = 'bob' AND sent_at > '2012-08-01';
username | sent_at | message | sender
----------+--------------------------+---------------------------+--------
bob | 2012-08-01 11:43:00+0000 | What are you doing later? | alice
bob | 2012-08-01 11:47:14+0000 | Bob? | alice
| Cassandra | 11,915,255 | 31 |
During the creation of a table, like in SQL we have DEFAULT to insert a default value for a column.
type tinyint(1) NOT NULL DEFAULT '0'
Is there anything to achieve the same in Cassandra?
| No, there is no default value in Cassandra. Furthermore, setting a column value to null means removing it.
| Cassandra | 19,155,691 | 31 |
I have a Java client that pushes (INSERT) records in batches to a Cassandra cluster. The elements in the batch all have the same row key, so they all will be placed in the same node. Also I don't need the transaction to be atomic, so I've been using unlogged batches.
The number of INSERT commands in each batch depends on different factors, but can be anything between 5 and 50,000. First I just put as many commands as I had in one batch and submitted it. This threw com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large. Then I used a cap of 1000 INSERTs per batch, and then went down to 300. I noticed I'm just randomly guessing without knowing exactly where this limit comes from, which can cause trouble down the road.
My question is, what is this limit? Can I modify it? How can I know how many elements can be placed in a batch? When my batch is "full"?
| I would recommend not increasing the cap, and just splitting into multiple requests. Putting everything in a giant single request will negatively impact the coordinator significantly. Having everything in one partition can improve the throughput in some sized batches by reducing some latency, but batches are never meant to be used to improve performance. So trying to optimize to get maximum throughput by using different batch sizes will depend largely on use case/schema/nodes and will require specific testing, since there's generally a cliff on the size where it starts to degrade.
There is a
# Fail any batch exceeding this value. 50kb (10x warn threshold) by default.
batch_size_fail_threshold_in_kb: 50
option in your cassandra.yaml to increase it, but be sure to test to make sure you're actually helping and not hurting your throughput.
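As a sketch of the "multiple requests" approach with the DataStax Java driver: since these writes don't need to be atomic, individual asynchronous inserts with a simple in-flight cap are usually gentler on the coordinator than one giant unlogged batch. The events table, its columns and the rows collection below are hypothetical:

import java.util.ArrayList;
import java.util.List;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;

PreparedStatement ps = session.prepare(
        "INSERT INTO events (row_key, ts, payload) VALUES (?, ?, ?)");

List<ResultSetFuture> inFlight = new ArrayList<>();
for (Object[] values : rows) {               // rows = your records as bind values
    inFlight.add(session.executeAsync(ps.bind(values)));
    if (inFlight.size() >= 128) {            // crude back-pressure: cap in-flight writes
        for (ResultSetFuture f : inFlight) {
            f.getUninterruptibly();
        }
        inFlight.clear();
    }
}
for (ResultSetFuture f : inFlight) {         // drain whatever is left
    f.getUninterruptibly();
}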
| Cassandra | 34,699,841 | 31 |
I'm trying to rename a table created via CQLSH.
E.g. rename table "AAA" to "BBB". Can't find any command to do so. Any ideas?
Using [cqlsh 3.1.6 | Cassandra 1.2.8 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
| I don't believe you can rename tables or keyspaces; there's no CQL3 operation to do it, and nothing in the old Thrift interfaces either, if I remember correctly.
One reason why you can't is that it would be an extremely hard thing for Cassandra to do due to its distributed nature: the change can't be done atomically, so the cluster would be in an inconsistent state and updates would most likely be lost. It's similar to creating and dropping tables, but in those cases it's expected that updates will be lost if they're issued before the table is created or after it has been dropped.
The only way that I know of to do what you ask is to create the new table and move all the data from the old to the new, then drop the old table. There might be a way to do it without moving the data, but it would probably require you to stop the cluster and change the name of all directories and files belonging to the table, and also change the metadata in the system.schema_columnfamilies table (but I'm not sure you can even do that).
| Cassandra | 18,112,384 | 30 |
It could be kind of a lame question, but in Cassandra does the primary key have to be unique?
For example in the following table:
CREATE TABLE users (
name text,
surname text,
age int,
adress text,
PRIMARY KEY(name, surname)
);
So is it possible to have 2 people in my database with the same name and surname but different ages? That would mean the same primary key.
| Yes the primary key has to be unique. Otherwise there would be no way to know which row to return when you query with a duplicate key.
In your case you can have 2 rows with the same name or with the same surname but not both.
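You can see this directly: inserting a second row with the same (name, surname) does not fail, it silently overwrites the first one (an "upsert"). A quick sketch, assuming a driver session connected to the keyspace that holds the users table from the question:

session.execute("INSERT INTO users (name, surname, age, adress) " +
        "VALUES ('John', 'Doe', 25, 'first street')");
session.execute("INSERT INTO users (name, surname, age, adress) " +
        "VALUES ('John', 'Doe', 30, 'second street')");

// Only one row comes back, with age = 30: the second INSERT replaced the first.
Row row = session.execute(
        "SELECT * FROM users WHERE name = 'John' AND surname = 'Doe'").one();
System.out.println(row.getInt("age"));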
| Cassandra | 21,762,599 | 30 |
Can someone explain the differences between --packages and --jars in a spark-submit script?
nohup ./bin/spark-submit --jars ./xxx/extrajars/stanford-corenlp-3.8.0.jar,./xxx/extrajars/stanford-parser-3.8.0.jar \
--packages datastax:spark-cassandra-connector_2.11:2.0.7 \
--class xxx.mlserver.Application \
--conf spark.cassandra.connection.host=192.168.0.33 \
--conf spark.cores.max=4 \
--master spark://192.168.0.141:7077 ./xxx/xxxanalysis-mlserver-0.1.0.jar 1000 > ./logs/nohup.out &
Also, do I require the --packages configuration if the dependency is in my application's pom.xml? (I ask because I just blew up my application by changing the version in --packages while forgetting to change it in the pom.xml.)
I am using the --jars currently because the jars are massive (over 100GB) and thus slow down the shaded jar compilation. I admit I am not sure why I am using --packages other than because I am following datastax documentation
| if you do spark-submit --help it will show:
--jars JARS Comma-separated list of jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
if it is --jars
then Spark doesn't hit Maven; it will search for the specified jar in the local file system, and it also supports the following URL schemes: hdfs/http/https/ftp.
so if it is --packages
then Spark will search for the specified package in the local Maven repo, then Maven Central and any additional repositories given by --repositories, and then download it.
Now Coming back to your questions:
Also, do I require the--packages configuration if the dependency is in my applications pom.xml?
Ans: No, if you are not importing/using classes from the jar directly but only need to load classes via some class loader or service loader (e.g. JDBC drivers); yes otherwise.
BTW, if you are using a specific version of a specific jar in your pom.xml, then why don't you make an uber/fat jar of your application, or provide the dependency jar in the --jars argument, instead of using --packages?
links to refer:
spark advanced-dependency-management
add-jars-to-a-spark-job-spark-submit
| Cassandra | 51,434,808 | 30 |
I downloaded apache-cassandra-0.8.5 for Ubuntu and extracted it. I read the README file.
I tried the command below in a shell:
bin/cassandra -f
But it said:
Error: Exception thrown by the agent : java.net.MalformedURLException: Local host name
unknown: java.net.UnknownHostException: node24.nise.local: node24.nise.local
what should I do?
| What does your /etc/hosts file contain? Maybe add localhost and node24.nise.local as names on the 127.0.0.1 line, for example: 127.0.0.1 localhost node24.nise.local
| Cassandra | 7,496,640 | 28 |
I'm undecided whether it's better, performance-wise, to use a very commonly shared column value (like Country) as partition key for a compound primary key or a rather unique column value (like Last_Name).
Looking at Cassandra 1.2's documentation about indexes I get this:
"When to use an index:
Cassandra's built-in indexes are best on a table
having many rows that contain the indexed value. The more unique
values that exist in a particular column, the more overhead you will
have, on average, to query and maintain the index. For example,
suppose you had a user table with a billion users and wanted to look
up users by the state they lived in. Many users will share the same
column value for state (such as CA, NY, TX, etc.). This would be a
good candidate for an index."
"When not to use an index:
Do not use an index to query a huge volume of records for a small
number of results. For example, if you create an index on a column
that has many distinct values, a query between the fields will incur
many seeks for very few results. In the table with a billion users,
looking up users by their email address (a value that is typically
unique for each user) instead of by their state, is likely to be very
inefficient. It would probably be more efficient to manually maintain
the table as a form of an index instead of using the Cassandra
built-in index. For columns containing unique data, it is sometimes
fine performance-wise to use an index for convenience, as long as the
query volume to the table having an indexed column is moderate and not
under constant load."
Looking at the examples from CQL's SELECT for
"Querying compound primary keys and sorting results", I see something like a UUID being used as partition key... which would indicate that it's preferable to use something rather unique?
| Indexing in the documentation you wrote up refers to secondary indexes. In cassandra there is a difference between the primary and secondary indexes. For a secondary index it would indeed be bad to have very unique values, however for the components in a primary key this depends on what component we are focusing on. In the primary key we have these components:
PRIMARY KEY(partitioning key, clustering key_1 ... clustering key_n)
The partitioning key is used to distribute data across different nodes, and if you want your nodes to be balanced (i.e. well distributed data across each node) then you want your partitioning key to be as random as possible. That is why the example you have uses UUIDs.
The clustering key is used for ordering so that querying columns with a particular clustering key can be more efficient. That is where you want your values to not be unique and where there would be a performance hit if unique rows were frequent.
The cql docs have a good explanation of what is going on.
| Cassandra | 18,168,379 | 28 |
I am trying to use Spark Cassandra Connector in Spark 1.1.0.
I have successfully built the jar file from the master branch on GitHub and have gotten the included demos to work. However, when I try to load the jar files into the spark-shell I can't import any of the classes from the com.datastax.spark.connector package.
I have tried using the --jars option on spark-shell and adding the directory with the jar file to Java's CLASSPATH. Neither of these options work. In fact, when I use the --jars option, the logging output shows that the Datastax jar is getting loaded, but I still cannot import anything from com.datastax.
I have been able to load the Tuplejump Calliope Cassandra connector into the spark-shell using --jars, so I know that's working. It's just the Datastax connector which is failing for me.
| I got it. Below is what I did:
$ git clone https://github.com/datastax/spark-cassandra-connector.git
$ cd spark-cassandra-connector
$ sbt/sbt assembly
$ $SPARK_HOME/bin/spark-shell --jars ~/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/connector-assembly-1.2.0-SNAPSHOT.jar
In scala prompt,
scala> sc.stop
scala> import com.datastax.spark.connector._
scala> import org.apache.spark.SparkContext
scala> import org.apache.spark.SparkContext._
scala> import org.apache.spark.SparkConf
scala> val conf = new SparkConf(true).set("spark.cassandra.connection.host", "my cassandra host")
scala> val sc = new SparkContext("spark://spark host:7077", "test", conf)
| Cassandra | 25,837,436 | 28 |
I'm doing a student project involving building and querying a Cassandra data cluster.
When my cluster load was light ( around 30GB ) my queries ran without a problem, but now that it's quite a bit bigger (1/2TB) my queries are timing out.
I thought that this problem might arise, so before I began generating and loading test data I had changed this value in my cassandra.yaml file:
request_timeout_in_ms
(Default: 10000 ) The default timeout for other, miscellaneous operations.
However, when I changed that value to like 1000000, then cassandra seemingly hung on startup -- but that could've just been the large timeout at work.
My goal for data generation is 2TB. How do I query that large a space without running into timeouts?
Queries:
SELECT huntpilotdn
FROM project.t1
WHERE (currentroutingreason, orignodeid, origspan,
origvideocap_bandwidth, datetimeorigination)
> (1,1,1,1,1)
AND (currentroutingreason, orignodeid, origspan,
origvideocap_bandwidth, datetimeorigination)
< (1000,1000,1000,1000,1000)
LIMIT 10000
ALLOW FILTERING;
SELECT destcause_location, destipaddr
FROM project.t2
WHERE datetimeorigination = 110
AND num >= 11612484378506
AND num <= 45880092667983
LIMIT 10000;
SELECT origdevicename, duration
FROM project.t3
WHERE destdevicename IN ('a','f', 'g')
LIMIT 10000
ALLOW FILTERING;
I have a demo keyspace with the same schemas, but a far smaller data size (~10GB) and these queries run just fine in that keyspace.
All these tables that are queried have millions of rows and around 30 columns in each row.
| If you are using Datastax cqlsh then you can specify client timeout seconds as a command line argument. The default is 10.
$ cqlsh --request-timeout=3600
Datastax Documentation
| Cassandra | 29,437,517 | 28 |
I am using Cassandra for the first time in a web app and I got a query problem.
Here is my table:
CREATE TABLE vote (
doodle_id uuid,
user_id uuid,
schedule_id uuid,
vote int,
PRIMARY KEY ((doodle_id), user_id, schedule_id)
);
On every request, I indicate my partition key, doodle_id.
For example I can make without any problems :
select * from vote where doodle_id = c4778a27-f2ca-4c96-8669-15dcbd5d34a7 and user_id = 97a7378a-e1bb-4586-ada1-177016405142;
But on the last request I made :
select * from vote where doodle_id = c4778a27-f2ca-4c96-8669-15dcbd5d34a7 and schedule_id = c37df0ad-f61d-463e-bdcc-a97586bea633;
I got the following error :
Bad Request: PRIMARY KEY column "schedule_id" cannot be restricted (preceding column "user_id" is either not restricted or by a non-EQ relation)
I'm new to Cassandra, but correct me if I'm wrong: in a composite primary key, the first part is the PARTITION KEY, which is mandatory to allow Cassandra to know where to look for the data.
Then the other parts are the CLUSTERING KEY, used to sort data.
But I still don't get why my first request works and not the second one?
If anyone could help, it would be a great pleasure.
| In Cassandra, you should design your data model to suit your queries. Therefore the proper way to support your second query (queries by doodle_id and schedule_id, but not necessarilly with user_id), is to create a new table to handle that specific query. This table will be pretty much the same, except the PRIMARY KEY will be slightly different:
CREATE TABLE votebydoodleandschedule (
doodle_id uuid,
user_id uuid,
schedule_id uuid,
vote int,
PRIMARY KEY ((doodle_id), schedule_id, user_id)
);
Now this query will work:
SELECT * FROM votebydoodleandschedule
WHERE doodle_id = c4778a27-f2ca-4c96-8669-15dcbd5d34a7
AND schedule_id = c37df0ad-f61d-463e-bdcc-a97586bea633;
This gets you around having to specify ALLOW FILTERING. Relying on ALLOW FILTERING is never a good idea, and is certainly not something that you should do in a production cluster.
| Cassandra | 28,565,470 | 27 |
I've tried all the measures from this post and Cassandra doc.
I've tried running all the versions of Cassandra including the latest release 3.7 from tarball and Debian package, but I keep getting errors when I execute cqlsh.
Error:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': TypeError('ref() does not take keyword arguments',)})
I had no problem running Cassandra before I upgraded my Linux Mint from 17.3 to 18.
I believe I installed all the necessary packages such as java 8 and python 2.7.12.
I think the problem exists in cassandra.yaml file since the default setting isn't working, but I'm not sure how to configure properly to get it running.
Any suggestions appreciated.
| You are running into CASSANDRA-11850, where cqlsh breaks with Python 2.7.11+. This ticket has been marked as "Resolved" and a patch has been applied to Cassandra 3.9 which has not been released yet.
I believe I installed all the necessary packages such as java 8 and python 2.7.12.
In the interim (until 3.9 is released) you can roll back to Python 2.7.10, and cqlsh should work (not trivial). Otherwise, DataStax DevCenter should work with Cassandra 3.7.
Edit 20161020
Cassandra 3.9 was released a few weeks ago, and can now be downloaded.
| Cassandra | 38,616,858 | 27 |
I want to use Docker to start my application and Cassandra database, and I would like to use Docker Compose for that. Unfortunately, Cassandra starts much slower than my application, and since my application eagerly initializes the Cluster object, I get the following exception:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra/172.18.0.2:9042 (com.datastax.driver.core.exceptions.TransportException: [cassandra/172.18.0.2:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1454)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
According to the stacktrace and a little debugging, it seems that Cassandra Java driver does not apply retry policies to the initial startup. This seems kinda weird to me. Is there a way for me to configure the driver so it will continue its attempts to connect to the server until it succeeds?
| You should be able to write some try/catch logic on the NoHostAvailableException to retry the connection after a 5-10 second wait. I would recommend only doing this a few times before throwing the exception after a certain time period where you know that it should have started by that point.
Example pseudocode
Session makeCassandraConnection(Cluster cluster, int retryCount) {
    RuntimeException lastException = new IllegalStateException("no connection attempt made");
    while (retryCount > 0) {
        try {
            return cluster.connect();      // throws NoHostAvailableException while Cassandra is still starting
        } catch (NoHostAvailableException e) {
            lastException = e;
            retryCount--;
            try {
                Thread.sleep(TimeUnit.SECONDS.toMillis(5));   // wait before the next attempt
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }
    throw lastException;
}
| Cassandra | 38,723,896 | 27 |
I am wondering because I searched the PDFs "[noSql] the definitive guide" and "beginning [noSql]" for the word "inheritance" but I didn't find anything. Am I missing something? Because I'm doing tablePerHierarchy inheritance with Hibernate and MySQL, has that become deprecated for some reason in [noSql]?
(replace [noSql] with the "not only sql" database you like)
| I know this answer is a little late, but for MongoDB, you're probably looking at something slightly different.
Mongo is schemaless, so the concept of "tablePerHierarchy" is not necessarily useful.
Assume the following
class A
property X
property Y
property Z
class B inherits from A
property W
In an RDBMS you would probably have something like this
table A: columns X, Y, Z
table B: columns X, Y, Z, W
But MongoDB does not have a schema. So you do not need to structure data in this way. Instead, you would have a "collection" containing all objects (or "documents") of type A or B (or C...).
So your collection would be a series of objects like this:
{"_id":"1", "X":1, "Y":2, "Z":3}
{"_id":"2", "X":5, "Y":6, "Z":7, "W":6}
You'll notice that I'm storing objects of type A right beside objects of type B. MongoDB makes this very easy. Just pull up a Document from the Collection and it "magically" has all of the appropriate fields/properties.
However, if you have "data objects" or "entities", you can make your life easier by adding a type.
{"_id":"1", "type":"A", "X":1, "Y":2, "Z":3}
{"_id":"2", "type":"B", "X":5, "Y":6, "Z":7, "W":6}
This makes it easier to write a factory class for loading your objects.
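For instance, such a factory could simply branch on that type field. A rough Java sketch; the document here is treated as a plain map of decoded fields rather than any particular MongoDB driver class:

import java.util.Map;

class EntityFactory {
    // doc is the decoded document, e.g. {"_id":"2","type":"B","X":5,"Y":6,"Z":7,"W":6}
    static A fromDocument(Map<String, Object> doc) {
        A entity = "B".equals(doc.get("type")) ? new B() : new A();
        entity.x = ((Number) doc.get("X")).intValue();
        entity.y = ((Number) doc.get("Y")).intValue();
        entity.z = ((Number) doc.get("Z")).intValue();
        if (entity instanceof B) {
            ((B) entity).w = ((Number) doc.get("W")).intValue();
        }
        return entity;
    }
}

class A { int x, y, z; }
class B extends A { int w; }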
| Cassandra | 3,040,584 | 26 |
Does Apache Cassandra support sharding?
Apologies that this question must seem trivial, but I cannot seem to find the answer. I have read that Cassandra was partially modeled after GAE's Big Table, which shards on a massive scale. But most of the documentation I'm currently finding on Cassandra seems to imply that Cassandra does not partition data horizontally across multiple machines, but rather supports many, many duplicate machines. This would imply that Cassandra is a good fit for high-availability reads, but would eventually break down if the write volume became very, very high.
| Cassandra does partition across nodes (because if you can't split it you can't scale it). All of the data for a Cassandra cluster is divided up onto "the ring" and each node on the ring is responsible for one or more key ranges. You have control over the Partitioner (e.g. Random, Ordered) and how many nodes on the ring a key/column should be replicated to based on your requirements.
This contains a pretty good overview. Basic architecture
Also, I highly recommend reading the Dynamo white paper. While Cassandra is different than Dynamo in many ways, conceptually they stem from the same roots. Check it out: Dynamo White Paper
| Cassandra | 16,428,008 | 26 |
I have a table/column family into which I am inserting rows that expire after a certain amount of time. Is it possible to then query the table to check which rows are going to expire soon (for diagnostic purposes), i.e. something like this:
select subject, ?ttl? from discussions;
| You can do
select subject, TTL(subject) from discussions;
to return the remaining TTL in seconds for subject.
E.g.
> insert into discussions (uid, subject) VALUES (now(), 'hello') using ttl 100;
> select subject, TTL(subject) from discussions;
subject | ttl(subject)
---------+--------------
hello | 84
since I waited 16 seconds before running the select.
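If you need the same remaining-TTL value from application code, here is a hedged sketch using the DataStax Java driver (contact point and keyspace name are assumptions; the table is the one above):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TtlCheck {
    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            // Alias the TTL() call so the column is easy to read back by name.
            for (Row row : session.execute(
                    "SELECT subject, TTL(subject) AS remaining FROM discussions")) {
                System.out.println(row.getString("subject") + " expires in "
                    + row.getInt("remaining") + " s");
            }
        }
    }
}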
| Cassandra | 19,067,220 | 26 |
According to Cassandra's log (see below), queries are getting aborted due to too many tombstones being present. This is happening because once a week I clean up (delete) rows whose counter is too low. This 'deletes' hundreds of thousands of rows (marks them as such with a tombstone).
It is not at all a problem if, in this table, a deleted row re-appears because a node was down during the cleanup process, so I set the gc grace time for the single affected table to 10 hours (down from the default 10 days) so the tombstoned rows can get permanently deleted relatively fast.
Regardless, I had to set the tombstone_failure_threshold extremely high (one hundred million, up from one hundred thousand) to avoid the below exception. My question is, is this necessary? I have absolutely no idea what types of queries get aborted: inserts, selects, deletes?
If it's merely some selects being aborted, it's not that big a deal. But that's assuming abort means 'capped', in that the query stops prematurely and returns whatever live data it managed to gather before too many tombstones were found.
Well, to ask it more simply: what happens when the tombstone_failure_threshold is exceeded?
INFO [HintedHandoff:36] 2014-02-12 17:44:22,355 HintedHandOffManager.java (line 323) Started hinted handoff for host: fb04ad4c-xxxx-4516-8569-xxxxxxxxx with IP: /XX.XX.XXX.XX
ERROR [HintedHandoff:36] 2014-02-12 17:44:22,667 SliceQueryFilter.java (line 200) Scanned over 100000 tombstones; query aborted (see tombstone_fail_threshold)
ERROR [HintedHandoff:36] 2014-02-12 17:44:22,668 CassandraDaemon.java (line 187) Exception in thread Thread[HintedHandoff:36,1,main]
org.apache.cassandra.db.filter.TombstoneOverwhelmingException
at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1516)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1335)
at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Forgot to mention; running Cassandra version 2.0.4
| When a query that returns a range of rows (or columns) is issued to Cassandra, it has to scan the table to collect the result set (this is called a slice). Now, deleted data is stored in the same manner as regular data, except that it's marked as tombstoned until compacted away. But the table reader has to scan through it nevertheless. So if you have tons of tombstones lying around, you will have an arbitrarily large amount of work to do to satisfy your ostensibly limited slice.
A concrete example: let's say you have two rows with clustering keys 1 and 3, and a hundred thousand dead rows with clustering key 2 that are located in between rows 1 and 3 in the table. Now when you issue a SELECT query where the key is to be >= 1 and < 3, you'll have to scan 100002 rows, instead of the expected two.
To make it worse, Cassandra doesn't just scan through these rows, but also has to accumulate them in memory while it prepares the response. This can cause an out-of-memory error on the node if things go too far out, and if multiple nodes are servicing the request, it may even cause a multiple failure bringing down the whole cluster. To prevent this from happening, the service aborts the query if it detects a dangerous number of tombstones. You're free to crank this up, but it's risky, if your Cassandra heap is close to running out during these spikes.
This exception was introduced in a recent fix, first available in 2.0.2. Here is the bug entry describing the problem the change was trying to address. Previously everything would have been just fine, until one of your nodes, or potentially several, suddenly crashed.
If it's merely some selects being aborted, it's not that big a deal.
But that's assuming abort means 'capped' in that the query stops
prematurely and returns whatever live data it managed to gather before
too many tombstones were found.
The query doesn't return a limited set, it actually drops the request completely. If you'd like to mitigate, maybe it's worth doing your bulk row deletion at the same cadence as the grace period, so you don't have this huge influx of tombstones every week.
| Cassandra | 21,755,286 | 26 |
I'm trying to understand the difference between these two and the scenarios in which you would prefer to use one over the other.
My specific use case is using cassandra as an event ingestion system backed by an analytics engine that interprets the event.
My model includes
event id (the partition key)
event time (a clustering column)
event type (I'm not sure whether to use a clustering column or a secondary index)
I figure the most common read scenario will be to get the events over a time range, hence event time is the clustering column. A less frequent read scenario might involve further filtering the event query by event type.
| A secondary index is pretty similar to what we know from regular relational databases. If you have a query with a where clause that uses column values that are not part of the primary key, lookup would be slow because a full row search has to be performed. Secondary indexes make it possible to service such queries efficiently. Secondary indexes are stored as extra tables, and just store extra data to make it easy to find your way in the main table.
So that's a good ol' index, which we already know about. So far, there's nothing new to cassandra and its distributed nature.
Partitioning and clustering are all about deciding how rows from the main table are spread among the nodes. This is specific to Cassandra, since it determines the distribution of data. The primary key consists of at least one column. The first column in the primary key is used as the partition key, which decides which node stores a row. If the primary key has additional columns, those columns are used to cluster the data on a given node: rows are stored on the node in lexicographic order by the clustering columns.
This question has more specifics on clustering columns: Clustering Keys in Cassandra
So an index on a given column X makes the lookup X --> primary key efficient. The partition key (first column in the primary key) determines which node a row is stored on. Clustering columns (additional columns in the primary key) determine which order rows are stored in on their assigned node.
So your intuition sounds about right - the event ID is presumably guaranteed unique, so is great for building a primary key. Event time is a great way to order rows on disk on a given node.
If you never needed to lookup data by event type, eg, never had a query like SELECT * FROM Events WHERE Type = Warning, then you have no need for your additional indexes, but your demands for partitioning don't change. Indexes make it easy to serve queries with different predicates. Since you mentioned that you indeed were planning on performing queries like that, you do in fact likely want an index on your EventType column.
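To make that concrete for the model in the question, here is a hedged sketch (CQL issued through the DataStax Java driver; the keyspace ks is assumed to already exist, and the column names are assumptions):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.util.Date;
import java.util.UUID;

public class EventsSchema {
    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // event_id picks the node (partition key); event_time orders rows on that node.
            session.execute("CREATE TABLE IF NOT EXISTS ks.events ("
                + " event_id uuid, event_time timestamp, event_type text, payload text,"
                + " PRIMARY KEY (event_id, event_time))");
            // Secondary index so the occasional filter-by-type query can be served.
            session.execute("CREATE INDEX IF NOT EXISTS events_type_idx ON ks.events (event_type)");
            // The common read: one event id over a time range, served by the clustering order.
            session.execute("SELECT * FROM ks.events WHERE event_id = ?"
                + " AND event_time >= ? AND event_time < ?",
                UUID.randomUUID(), new Date(0), new Date());
        }
    }
}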
Check out the cassandra documentation: http://www.datastax.com/documentation/cql/3.0/cql/ddl/ddl_compound_keys_c.html
Cassandra uses the first column name in the primary key definition as the partition key.
...
In the case of the playlists table, the song_order is the clustering column. The data for each partition is clustered by the remaining column or columns of the primary key definition. On a physical node, when rows for a partition key are stored in order based on the clustering columns
| Cassandra | 24,622,511 | 26 |
A columnar database is supposed to store groups of columns together, but Cassandra stores data row-wise.
An SSTable holds multiple rows of data mapped to their corresponding partition key. So I feel like Cassandra is a row-wise data store like MySQL, but with other benefits like "wide rows", the fact that not every column needs to be present for every row, and of course that it's in memory. Please correct me if I'm wrong.
| If you go to the Apache Cassandra project on GitHub, and scroll down to the "Executive Summary," you will get your answer:
Cassandra is a partitioned row store. Rows are organized into tables
with a required primary key.
Partitioning means that Cassandra can distribute your data across
multiple machines in an application-transparent matter. Cassandra will
automatically repartition as machines are added and removed from the
cluster.
Row store means that like relational databases, Cassandra organizes
data by rows and columns.
"So I feel like Cassandra is a row wise data store"
And that would be correct.
| Cassandra | 25,441,921 | 26 |
I want to know the size of a table in Cassandra.
Is there a query in cqlsh similar to show dbs in MongoDB?
If not, is there another way to get the size of tables?
| nodetool cfstats -- <keyspace>.<table>
will give you the 'Space used' by a table.
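On newer Cassandra versions the same numbers are exposed under nodetool tablestats <keyspace>.<table>; cfstats was renamed, though as far as I know the old name is still kept as an alias.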
| Cassandra | 27,702,883 | 26 |
I am trying to configure Spring Data with Cassandra.
But I am getting the below error when my app is deployed in Tomcat.
When I check, the given port (127.0.0.1:9042) is reachable. I have included the stack trace and Spring configuration below.
Does anyone have an idea about this error?
Full stack trace :
2015-12-06 17:46:25 ERROR web.context.ContextLoader:331 - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession': Invocation of init method failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1572)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:736)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:759)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:434)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4994)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5492)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1230)
at com.datastax.driver.core.Cluster.init(Cluster.java:157)
at com.datastax.driver.core.Cluster.connect(Cluster.java:245)
at com.datastax.driver.core.Cluster.connect(Cluster.java:278)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.afterPropertiesSet(CassandraCqlSessionFactoryBean.java:82)
at org.springframework.data.cassandra.config.CassandraSessionFactoryBean.afterPropertiesSet(CassandraSessionFactoryBean.java:43)
===================================================================
Spring Configuration :
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans ...>
<cassandra:cluster id="cassandraCluster"
contact-points="127.0.0.1" port="9042" />
<cassandra:converter />
<cassandra:session id="cassandraSession" cluster-ref="cassandraCluster"
keyspace-name="blood" />
<cassandra:template id="cqlTemplate" />
<cassandra:repositories base-package="com.blood.dao.nosql" />
<cassandra:mapping entity-base-packages="com.blood.domain.nosql" />
</beans:beans>
| The problem is that Spring Data Cassandra (as of December 2015 when I write this) does not provide support for Cassandra 3.x. Here's an excerpt from a conversation with one of the developers in the #spring channel on freenode:
[13:49] <_amicable> Hi all, does anybody know if spring data cassandra supports cassandra 3.x? All dependencies & datastax drivers seem to be 2.x
[13:49] <@_ollie> amicable: Not in the near future.
[13:49] <_amicable> _ollie: thanks.
[13:50] <_amicable> I'll go and look at the relative merits of 2.x vs 3.x then
;)
[13:51] <@_ollie> SD Cassandra is a community project (so far) and its progress highly depends on how much time the developers can actually spend on it.
[13:51] <@_ollie> We will have someone joining the team in February 2016 to get the project more closely aligned to the core Spring Data projects.
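(A later, hedged note: if I remember correctly, newer Spring Data Cassandra releases, starting around the 1.5.x / Ingalls line, moved to the DataStax 3.x driver and do support Cassandra 3.x.)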
| Cassandra | 34,117,374 | 26 |
I have a huge database (kind of like WordNet) and want to know whether it's easier to use Cassandra instead of MySQL/PostgreSQL.
All my life I have used MySQL and PostgreSQL and I can easily think in terms of relational algebra, but several weeks ago I learned about Cassandra and that it's used at Facebook and Twitter.
Is it more convenient?
What DBMSs are usually used nowadays to store social network data, relationships between objects, or WordNet-style data?
| There is nothing like a silver-bullet solution; everything is built to solve a specific problem and has its own pros and cons. It is up to you to decide what problem statement you have and what solution best fits it. Whether you use Cassandra (NoSQL) or MySQL (RDBMS) is driven entirely by your system's requirements. Below are some inputs that will help you take a better decision while deciding on a database.
Why use NoSQL
In the case of RDBMS databases, making a choice is quite easy because almost all the databases in this category, like MySQL, Oracle, MS SQL, and PostgreSQL, offer almost the same kind of solution oriented around the ACID properties. When it comes to NoSQL, the decision becomes difficult because every NoSQL database offers a different solution and you have to understand which one best suits your app/system requirements. For example, MongoDB fits use cases where your system demands a schema-less document store. HBase might fit search engines, analysing log data, or any place where scanning huge, two-dimensional, join-less tables is a requirement. Redis is built to provide in-memory access to a variety of data structures (trees, queues, linked lists, etc.) and can be a good fit for real-time leaderboards or pub-sub style systems. Similarly, there are other databases in this category (including Cassandra) which fit different problems. Now let's move to the original questions and answer them one by one.
When to use Cassandra
As part of the NoSQL family, Cassandra offers a solution for problems where the requirement is a very write-heavy system with a fairly responsive reporting system on top of the stored data. Consider the use case of web analytics, where log data is stored for each request and you want to build an analytical platform around it to count hits by hour, by browser, by IP, etc. in real time. You can refer to this blog post (http://blogs.shephertz.com/2015/04/22/why-cassandra-excellent-choice-for-realtime-analytics-workload/) to understand more about the use cases where Cassandra fits in.
When to use an RDBMS instead of Cassandra/NoSQL
Cassandra is a NoSQL database and does not provide ACID or relational data properties. If you have a strong requirement for ACID properties (for example, financial data), Cassandra is not a fit. Obviously, you can make it work, but you will end up writing lots of application code to handle ACID semantics and will lose badly on time to market. Managing that kind of system with Cassandra would also be complex and tedious for you.
| Cassandra | 2,529,871 | 25 |
I have been hearing a lot about document oriented data stores like CouchDB. I understand the uses of BigTable like stores such as Cassandra. After reading this question, I was wondering what the conditions would be to merit using a document store?
| Column-family stores such as Bigtable and Cassandra have very limited querying capabilities. The application is responsible for maintaining indexes in order to query a more complex data model.
Document databases allow you to query the content, not just the key. It will also manage the indexes for you, reducing the complexity of your application.
Domain-driven design evangelizes the use of aggregates and value objects. As Ayende points out, (complex) aggregates are very natural candidates to be stored as a single document, instead of normalizing them over multiple tables or column families. This will reduce the complexity of your persistence layer. There's also less chance that related data is scattered across multiple nodes, as all the data is contained in a single document.
If your application needs to store polymorphic objects, document databases are also a good candidate. Of course, this could also be stored in Cassandra, but you won't have as much querying capabilities. At least not out of the box.
Think of a document database as a luxurious sports car. It doesn't need a professional driver (read: complex application) to get you from A to B, it has features such as air conditioning and comfortable seats and it will lap the high-scalability track in an acceptable time. However, if you want to set a lap record on the high-scalability track, you will need a professional driver and a highly optimized car (e.g. Cassandra), which lacks features such as air conditioning.
| Cassandra | 3,376,636 | 25 |
We are using Cassandra as the data historian for our fleet management solution. We have a table in Cassandra , which stores the details of journey made by the vehicle. The table structure is as given below
CREATE TABLE journeydetails(
bucketid text,
vehicleid text,
starttime timestamp,
stoptime timestamp,
travelduration bigint,
PRIMARY KEY (bucketid,vehicleid,starttime,travelduration)
);
Where:
bucketid :- partition key which is a combination of month and year
vehicleid : -unique id of the vehicle
starttime :- start time of the journey
stoptime :- end time of the journey
travelduration:- duration of travel in milliseconds
We would like to run the following query - get all the travels of a vehicle - 1234567 between 2015-12-1 and 2015-12-3 whose travel duration is greater than 30 minutes
When I run this query:
select * from journeydetails where bucketid in('2015-12') and vehicleid in('1234567')
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59'
and travelduration > 1800000;
I get this result:
InvalidRequest: code=2200 [Invalid query] message="Clustering column "travelduration"
cannot be restricted (preceding column "starttime" is restricted by a non-EQ relation)
Does anyone have a recommendation on how to fix this issue?
| select * from journeydetails where bucketid in('2015-12') and vehicleid in('1234567')
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59'
and travelduration > 1800000;
That's not going to work. The reason goes back to how Cassandra stores data on-disk. The idea with Cassandra is that it is very efficient at returning a single row with a precise key, or at returning a continuous range of rows from the disk.
Your rows are partitioned by bucketid, and then sorted on disk by vehicleid, starttime, and travelduration. Because you are already executing a range query (non-EQ relation) on starttime, you cannot restrict the key that follows. This is because the travelduration restriction may disqualify some of the rows in your range condition. This would result in an inefficient, non-continuous read. Cassandra is designed to protect you from writing queries (such as this), which may have unpredictable performance.
Here are two alternatives:
1- If you could restrict all of your key columns prior to travelduration (with an equals relation), then you could apply your greater-than condition:
select * from journeydetails where bucketid='2015-12' and vehicleid='1234567'
and starttime='2015-12-1 00:00:00' and travelduration > 1800000;
Of course, restricting on an exact starttime may not be terribly useful.
2- Another approach would be to omit travelduration altogether, and then your original query would work.
select * from journeydetails where bucketid='2015-12' and vehicleid='1234567'
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59';
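If the duration filter still matters, one option with this second approach is to apply it client-side after the range read. A hedged sketch with the DataStax 3.x Java driver (keyspace name is an assumption; the table is the one from the question):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class JourneyQuery {
    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("fleet")) {
            // Restrict only up to the clustering column we range over...
            for (Row row : session.execute(
                    "SELECT * FROM journeydetails"
                    + " WHERE bucketid = '2015-12' AND vehicleid = '1234567'"
                    + " AND starttime > '2015-12-01 00:00:00'"
                    + " AND starttime < '2015-12-04 00:00:00'")) {
                // ...and apply the duration predicate in the application.
                if (row.getLong("travelduration") > 1800000L) {
                    System.out.println(row.getTimestamp("starttime") + " -> "
                        + row.getLong("travelduration") + " ms");
                }
            }
        }
    }
}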
Unfortunately, Cassandra does not offer a large degree of query flexibility. Many people have found success using a solution like Spark (alongside Cassandra) to achieve this level of reporting.
And just a side note, but don't use IN unless you have to. Querying with IN is similar to using a secondary index, in that Cassandra has to talk to several nodes to satisfy your query. Calling it with a single item probably isn't too big of a deal. But IN is one of those old RDBMS habits that you should really break before getting too deep into Cassandra.
| Cassandra | 34,422,515 | 25 |
Cassandra newbie question. I'm collecting some data from a social networking site using REST calls. So I end up with the data coming back in JSON format.
The JSON is only one of the columns in my table. I'm trying to figure out what the "best practice" is for storing the JSON string.
First I thought of using the map type, but the JSON contains a mix of strings, numerical types, etc. It doesn't seem like I can declare wildcard types for the map key/value. The JSON string can be quite large, probably over 10KB in size. I could potentially store it as a string, but it seems like that would be inefficient. I would assume this is a common task, so I'm sure there are some general guidelines for how to do this.
I know Cassandra has native support for JSON, but from what I understand, that's mostly used when the entire JSON map matches 1-1 with the database schema. That's not the case for me. The schema has a bunch of columns and the JSON string is just a sort of "payload". Is it better to store the JSON string as a blob or as text? BTW, the Cassandra version is 2.1.5.
Any hints appreciated. Thanks in advance.
| In the Cassandra Storage engine there's really not a big difference between a blob and a text, since Cassandra stores text as blobs essentially. And yes the "native" JSON support you speak of is only for when your data model matches your JSON model, and it's only in Cassandra 2.2+.
I would store it as a text type, and you shouldn't have to implement anything to compress your JSON data when sending it (or handle uncompressing), since Cassandra's binary protocol supports transport compression. Also make sure your table stores the data compressed with the same compression algorithm (I suggest LZ4, since it's the fastest algorithm implemented) to avoid doing compression for each read request. Thus, if you configure the table to store data compressed and use transport compression, you don't even have to implement either yourself.
You didn't say which Client Driver you're using, but here's the documentation on how to setup Transport Compression for Datastax Java Client Driver.
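For illustration, a hedged sketch of both halves with the DataStax Java driver (LZ4 transport compression needs the lz4 jar on the classpath; the keyspace, table and column names are assumptions, and the table option below uses the 2.1-era sstable_compression key):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolOptions;
import com.datastax.driver.core.Session;

public class CompressedJsonStore {
    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withCompression(ProtocolOptions.Compression.LZ4)   // compress client<->node traffic
                .build();
             Session session = cluster.connect("social")) {
            // Match the on-disk compression to the same algorithm.
            session.execute("ALTER TABLE events"
                + " WITH compression = {'sstable_compression': 'LZ4Compressor'}");
            // The JSON payload is just a text column next to the structured columns.
            session.execute(
                "INSERT INTO events (event_id, source, payload) VALUES (uuid(), ?, ?)",
                "twitter", "{\"user\":\"someone\",\"likes\":42}");
        }
    }
}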
| Cassandra | 36,342,531 | 25 |
Sorry if this question is somewhat subjective. I am new to 'cloud store', 'distributed store' and concepts like these. I really wonder what they have in common and want to get an overview of all of them. What do I need to prepare if I want to write a product similar to these?
| The NoSQL Database site summarizes the concept like this:
Next Generation Databases mostly
address some of the points: being
non-relational, distributed,
open-source and horizontal scalable.
The original intention has been modern
web-scale databases. The movement
began early 2009 and is growing
rapidly. Often more characteristics
apply as: schema-free, replication
support, easy API, eventually
consistency, and more. So the
misleading term "nosql" (the community
now translates it mostly with "not
only sql") should be seen as an alias
to something like the definition
above.
That site also maintains an archive of articles on NoSQL databases. Most of them seem to focus on particular products but there are some more general overviews. If you are serious about building your own one then Design Patterns for Distributed Non-Relational Databases does a good round up of things you need to consider.
| Cassandra | 2,437,005 | 24 |
I am planning on using ElasticSearch to index my Cassandra database. I am wondering if anyone has seen the practical limits of ElasticSearch. Do things get slow in the petabyte range? Also, has anyone had any problems using ElasticSearch to index Cassandra?
| See this thread from 2011, which mentions ElasticSearch configurations with 1700 shards each of 200GB, which would be in the 1/3 petabyte range. I would expect that the architecture of ElasticSearch would support almost limitless horizontal scalability, because each shard index works separately from all other shards.
The practical limits (which would apply to any other solution as well) include the time needed to actually load that much data in the first place. Managing a Cassandra cluster (or any other distributed datastore) of that size will also involve significant workload just for maintenance, load balancing etc.
| Cassandra | 6,359,382 | 24 |
I want to run www.example.com and api.example.com on the same port 80.
This is what I have. Everything I found while googling leads to the code below, but it is not working.
server {
listen 80 default_server;
# listen [::]:80 default_server ipv6only=on;
root /var/www/example.com/html/example/app;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name www.example.com www.example.org;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
location /bower_components {
alias /var/www/example.com/html/example/bower_components;
}
location /scripts {
alias /var/www/example.com/html/example/scripts;
}
location /content {
alias /var/www/example.com/html/example/content;
}
location /api {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:3836;
}
}
server {
listen 80
server_name api.example.com
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:3836;
}
}
I do not know the reason. Any suggestions on this?
| Create two separate files (you don't have to, but it will be much clearer): /etc/nginx/sites-available/www.example.com and /etc/nginx/sites-available/api.example.com
The api.example.com file's content:
server {
listen 80;
server_name api.example.com;
root /var/www/api.example.com/html/example/app; #also add a root dir here
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:3836;
}
}
The www.example.com's content:
server {
listen 80 default_server;
# listen [::]:80 default_server ipv6only=on;
root /var/www/example.com/html/example/app;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name www.example.com www.example.org;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
location /bower_components {
alias /var/www/example.com/html/example/bower_components;
}
location /scripts {
alias /var/www/example.com/html/example/scripts;
}
location /content {
alias /var/www/example.com/html/example/content;
}
location /api {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:3836;
}
}
And finally enable the domains:
sudo ln -s /etc/nginx/sites-available/www.example.com /etc/nginx/sites-enabled/www.example.com and sudo ln -s /etc/nginx/sites-available/api.example.com /etc/nginx/sites-enabled/api.example.com
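After creating the symlinks, check the configuration with sudo nginx -t and reload nginx (for example sudo service nginx reload) so the new server blocks take effect.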
| NGINX | 33,055,212 | 53 |
I'm trying to get Gunicorn to use Python3 for a Django app I want to make. I'm using Digital Ocean's Django image to get started. It comes with Django, Gunicorn, and Nginx installed and configured. The default Django project that comes with this image seems to work fine for Python 2.
I've apt-get'ed these packages.
python3
python3-psycopg2
python3-dev
python3-pip
In order to try to avoid any problems, I've also done this.
pip uninstall django
pip3 install django
I rm -rf'ed the stock project and created a new one with django-admin.py startproject django_project. django-admin.py uses Python 3 (according to the shebang). Later, I use python3 manage.py startapp django_app to create a new app.
At this point, everything works fine. Just like the default app. Then, in django_app/views.py I do this and it breaks.
from django.shortcuts import render
from django.http import HttpResponse
def index(request):
# Python 2 and 3 - works fine
# print('PRINTING')
# Python 3 only - crashes
print(1, 2, end=' ')
return HttpResponse("Hello, world! This is my first view.")
The error page says I'm using Python 2.7.6.
Okay, so then I thought I could install Gunicorn through pip for Python 3, so I do this.
pip uninstall gunicorn
pip3 install gunicorn
But then I just end up with 502 Bad Gateway. When I do service gunicorn status, I get gunicorn stop/waiting. I tried service gunicorn restart, but it still says gunicorn stop/waiting.
I did a which gunicorn and it's installed at /usr/local/bin/gunicorn. Uhh... I'm not really sure what else I could try. Any help would be greatly appreciated. Thanks.
| It seems that there's a package for this called gunicorn3 (this was tested on ubuntu)
sudo apt-get install gunicorn3
then running the following command should work and run gunicorn with python3:
gunicorn3 --log-level debug --bind 0.0.0.0:30443 server:app
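For the Django project in the question, the module:callable spec would presumably look more like gunicorn3 --bind 0.0.0.0:8000 django_project.wsgi:application, since server:app above is a Flask-style example and a project created with django-admin.py startproject exposes application in its wsgi.py.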
| NGINX | 23,928,213 | 53 |
Let's say I have a URL like this: www.example.com/a/b/sth, and I write a location block in Nginx config:
location ^~ /a/b/(?<myvar>[a-zA-Z]+) {
# use variable $myvar here
if ($myvar = "sth") { ... }
}
I hope to be able to use variable $myvar captured from the URL inside the block, however, Nginx keeps telling me this variable is not defined and won't start:
nginx: [emerg] unknown "myvar" variable
| Old thread but I had the same problem...
I think the error isn't related to the installed PCRE version.
NGINX doesn't treat your location as a regular expression unless the location block starts with ~ (the ^~ modifier you used is a prefix match, not a regex).
You need to use something like this:
location ~ ^/a/b/(?<myvar>[a-zA-Z]+) {
# use variable $myvar here
if ($myvar = "sth") { ... }
}
| NGINX | 13,706,658 | 53 |
I Have an Angular application. I run the command ng build --prod --aot to generate the dist folder.
In the dist folder I created a file named Staticfile then I uploaded the dist folder to pivotal.io with the following commands:
cf push name-app --no-start
cf start name-app
The app runs well. I have a nav bar, and when I change the path via the nav bar everything works fine. But when I enter the URL manually, I get the error 404 Not Found (nginx).
This is my app.component.ts:
const appRoutes: Routes = [
{ path: 'time-picker', component: TimePickerComponent },
{ path: 'material-picker', component: MaterialPickerComponent },
{ path: 'about', component: AboutComponent },
{ path: 'login', component: LoginComponent },
{ path: 'registration', component: RegistrationComponent },
{
path: '',
redirectTo: '/time-picker',
pathMatch: 'full'
}
];
@NgModule({
declarations: [
AppComponent,
TimePickerComponent,
MaterialPickerComponent,
DurationCardComponent,
AboutComponent,
LoginComponent,
RegistrationComponent,
],
imports: [RouterModule.forRoot(
appRoutes
// ,{ enableTracing: true } // <-- debugging purposes only
),
FormsModule,
BrowserAnimationsModule,
BrowserModule,
MdCardModule, MdDatepickerModule, MdNativeDateModule, MdInputModule
],
providers: [{ provide: DateAdapter, useClass: CustomDateAdapter }],
bootstrap: [AppComponent]
})
export class AppModule {
constructor(private dateAdapter: DateAdapter<Date>) {
this.dateAdapter.setLocale('fr-br');
}
}
| For those not using a Staticfile, you might want to try this.
I had the same problem with nginx serving Angular.
The following is the default config file, probably found somewhere in /etc/nginx/sites-available/yoursite
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
but what we acctually want is...
location / {
# First attempt to serve request as file, then
# as directory, then redirect to index(angular) if no file found.
try_files $uri $uri/ /index.html;
}
| NGINX | 45,858,957 | 52 |
I am deploying a Rails 4 app to a Fedora 19 x64 server using Nginx and Unicorn. The problem is that I get an error when visiting the address: "We're sorry, but something went wrong."
My Nginx error log (/var/log/nginx/error.log) shows:
2014/03/08 03:50:12 [warn] 23934#0: conflicting server name "localhost" on 0.0.0.0:80, ignored
2014/03/08 03:50:12 [warn] 23936#0: conflicting server name "localhost" on 0.0.0.0:80, ignored
2014/03/08 03:50:14 [crit] 23939#0: *1 connect() to unix:/tmp/unicorn.[app name].sock failed (2: No such file or directory) while connecting to upstream, client: [client IP], server: localhost, request: "GET /v1/industries/1.xml HTTP/1.1", upstream: "http://unix:/tmp/unicorn.[app name].sock:/v1/industries.json", host: "api.[app name].ca"
As far as I can see from this, Nginx is not aware that the socket exists. However, looking in /tmp, it does:
[root@localhost tmp]# ls
unicorn.[app name].sock
I keep getting stuck at this point, no matter how I modify my Unicorn config file or my Nginx config file. Both are attacted:
/var/www/[app name]/config/unicorn.rb:
working_directory "/var/www/[app name]"
pid "/var/www/[app name]/pids/unicorn.pid"
stderr_path "/var/www/[app name]/log/unicorn.log"
stdout_path "/var/www/[app name]/log/unicorn.log"
listen "/tmp/unicorn.[app name].sock"
worker_processes 2
timeout 30
/etc/nginx/conf.d/default.conf:
upstream app {
server unix:/tmp/unicorn.[app name].sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /var/www/[app name]/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
The way I have been starting these two daemons is as follows:
unicorn_rails -c /var/www/[app name]/config/unicorn.rb -D -E production
service nginx start
The Unicorn logs contain no relevant information, nor do the production logs. This setup seems straightforward; has anyone experienced this before? Thanks for any help you can provide.
By the way, I was initially following this tutorial: https://www.digitalocean.com/community/articles/how-to-deploy-rails-apps-using-unicorn-and-nginx-on-centos-6-5
| After many hours and a grand total of 3 beers, I've managed to figure out the problem. After hours of digging, I finally came across this Server Fault answer
In layman's terms, it appears that on Fedora the nginx service is given its own private /tmp (and /var/tmp, as I have discovered) by systemd, so files created there by another process are not visible to it; the socket Unicorn creates in /tmp is therefore invisible to Nginx.
The solution I have employed is to have Unicorn create sockets in /var/sockets.
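In practice that means pointing the listen line in config/unicorn.rb and the upstream server line in the nginx config at something like /var/sockets/unicorn.[app name].sock (with that directory created and writable by the Unicorn user); turning off the nginx service's private /tmp (systemd's PrivateTmp) would presumably also work, but moving the socket is the simpler change.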
| NGINX | 22,272,943 | 52 |