Columns: question (string), answer (string), tag (string, 130 classes), question_id (int64), score (int64)
How would I have multiple $or operations? So far I've tried the following but it silently ignores the 2nd $or. { $or: [{a: 2}, {a: 3}], $or: [{b: 5}, {b: 4}] } I assume this is because I'm using two identical keys. Is there any way around this?
Mongo 2.0 added an $and operator, so you can do a query like this: db.things.find({$and: [{$or : [{'a':1},{'b':2}]},{$or : [{'a':2},{'b':3}]}] }) http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24and
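For reference, a minimal pymongo sketch of the same query; the connection URI and the collection name things are assumptions, not part of the original question:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
things = client["test"]["things"]

# Two $or conditions combined under a single $and key,
# so neither clause overwrites the other.
query = {
    "$and": [
        {"$or": [{"a": 2}, {"a": 3}]},
        {"$or": [{"b": 5}, {"b": 4}]},
    ]
}
for doc in things.find(query):
    print(doc)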
MongoDB
5,150,061
90
In MongoDB, if I were to store an array (say ["red", "blue"]) in a field "color", does it index "red" and "blue" so I could query for "red", for example, or does it make {"red", "blue"} a composite index?
When it comes to indexing arrays, MongoDB indexes each value of the array so you can query for individual items, such as "red." For example: > db.col1.save({'colors': ['red','blue']}) > db.col1.ensureIndex({'colors':1}) > db.col1.find({'colors': 'red'}) { "_id" : ObjectId("4ccc78f97cf9bdc2a2e54ee9"), "colors" : [ "red", "blue" ] } > db.col1.find({'colors': 'blue'}) { "_id" : ObjectId("4ccc78f97cf9bdc2a2e54ee9"), "colors" : [ "red", "blue" ] } For more information, check out MongoDB's documentation on Multikeys: http://www.mongodb.org/display/DOCS/Multikeys
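The same behaviour can be exercised from Python with pymongo, using the modern insert_one/create_index calls; this is a rough sketch and the database and collection names are placeholders:

from pymongo import MongoClient, ASCENDING

col1 = MongoClient()["test"]["col1"]

col1.insert_one({"colors": ["red", "blue"]})
col1.create_index([("colors", ASCENDING)])  # an index on an array field becomes a multikey index

# Each array element is indexed individually, so both queries match the same document.
print(col1.find_one({"colors": "red"}))
print(col1.find_one({"colors": "blue"}))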
MongoDB
4,059,126
90
The following is my MongoDB connection dial from GoLang. But it's returning a panic "server returned error on SASL authentication step: Authentication failed.". My username, password, hostAddrs and dbName are correct. What am I missing here?

dbName := os.Getenv("ENV_DBNAME")
userName := os.Getenv("ENV_DBUSER")
password := os.Getenv("ENV_DBPASS")
dbHost := os.Getenv("ENV_DBHOST")
mongoDialInfo := &mgo.DialInfo{
    Addrs:    []string{dbHost},
    Database: dbName,
    Username: userName,
    Password: password,
    Timeout:  60 * time.Second,
}
sess, err := mgo.DialWithInfo(mongoDialInfo)
if err != nil {
    panic(err)
}
I faced a similar error, and adding the --authenticationDatabase parameter fixed it when connecting to a remote MongoDB. Use a format similar to the one below:

$ mongorestore --host databasehost:98761 --username restoreuser --password restorepwd --authenticationDatabase admin --db targetdb ./path/to/dump/
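When the failing client is configured in code rather than via mongorestore, the same fix is to point the auth source at the database where the user was defined. A hedged pymongo illustration; host, credentials and database names are placeholders:

from pymongo import MongoClient

# The user was created in the "admin" database, so authSource must point there.
client = MongoClient(
    "mongodb://restoreuser:restorepwd@databasehost:27017/targetdb?authSource=admin"
)
db = client.get_default_database()  # "targetdb" from the URI
print(db.list_collection_names())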
MongoDB
38,744,131
89
MongoDB bulk operations have two options: Bulk.find.updateOne() Adds a single document update operation to a bulk operations list. The operation can either replace an existing document or update specific fields in an existing document. Bulk.find.replaceOne() Adds a single document replacement operation to a bulk operations list. Use the Bulk.find() method to specify the condition that determines which document to replace. The Bulk.find.replaceOne() method limits the replacement to a single document. According to the documentation, both of these two methods can replace a matching document. Do I understand correctly, that updateOne() is more general purpose method, which can either replace the document exactly like replaceOne() does, or just update its specific fields?
With replaceOne() you can only replace the entire document, while updateOne() allows for updating fields. Since replaceOne() replaces the entire document - fields in the old document not contained in the new will be lost. With updateOne() new fields can be added without losing the fields in the old document. For example if you have the following document: { "_id" : ObjectId("0123456789abcdef01234567"), "my_test_key3" : 3333 } Using: replaceOne({"_id" : ObjectId("0123456789abcdef01234567")}, { "my_test_key4" : 4}) results in: { "_id" : ObjectId("0123456789abcdef01234567"), "my_test_key4" : 4.0 } Using: updateOne({"_id" : ObjectId("0123456789abcdef01234567")}, {$set: { "my_test_key4" : 4}}) results in: { "_id" : ObjectId("0123456789abcdef01234567"), "my_test_key3" : 3333.0, "my_test_key4" : 4.0 } Note that with updateOne() you can use the update operators on documents.
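A small pymongo sketch of the same contrast; the collection name and ObjectId are illustrative only:

from pymongo import MongoClient
from bson import ObjectId

coll = MongoClient()["test"]["my_collection"]
oid = ObjectId("0123456789abcdef01234567")

# replace_one swaps the whole document body: my_test_key3 disappears.
coll.replace_one({"_id": oid}, {"my_test_key4": 4})

# update_one with $set only touches the named field: my_test_key3 is kept.
coll.update_one({"_id": oid}, {"$set": {"my_test_key4": 4}})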
MongoDB
35,848,688
89
I have a collection of documents : { "networkID": "myNetwork1", "pointID": "point001", "param": "param1" } { "networkID": "myNetwork2", "pointID": "point002", "param": "param2" } { "networkID": "myNetwork1", "pointID": "point003", "param": "param3" } ... pointIDs are unique but networkIDs are not. Is it possible to query Mongodb in such a way that the result will be : [myNetwork1,myNetwork2] right now I only managed to return [myNetwork1,myNetwork2,myNetwork1] I need a list of unique networkIDs to populate an autocomplete select2 component. As I may have up to 50K documents I would prefer mongoDb to filter the results at the query level.
I think you can use the db.collection.distinct(field, query, options) mongosh method as documented here. It will give you the distinct values for networkID. The options argument (and even the query argument) is optional, so you can pass just the field name:

db.collection.distinct('networkID')

Or, as @VagnerWentz asks, you can specify a query/filter as the second argument using normal mongo syntax. Suppose you have a field called PaymentDate:

db.collection.distinct('networkID', { PaymentDate: { $gt: ... } })

The distinct shell method only returns a list of distinct values in the given field ('networkID'), so duplicate "rows" are dropped. If you want a "full record" (not just 'networkID'), you need a different technique: mongo aggregation with the $group operator (akin to a SQL GROUP BY). Then decide which accumulator operator to use to pick a row for each distinct 'networkID': will you use $first? $max? Or compile them into a new list with $push/$addToSet?
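In Python with pymongo the same server-side filtering looks roughly like this; the database and collection names and the example filter are placeholders:

from pymongo import MongoClient

coll = MongoClient()["mydb"]["points"]

# Returns each distinct networkID once, filtered on the server.
network_ids = coll.distinct("networkID")
print(network_ids)  # e.g. ['myNetwork1', 'myNetwork2']

# An optional filter document narrows which documents are considered.
filtered = coll.distinct("networkID", {"param": "param1"})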
MongoDB
28,155,857
89
If someone can provide some insights here I would GREATLY appreciate it. I had a express/node.js app running on MongoDB locally successfully, but upon restarting my computer, I attempted to restart the Mongo server and it began giving errors and wouldn't start. Since then, I have re-installed Mongo several times only to find the same error occurring. this is what I am receiving: privee:mongodb-osx-x86_64-2.4.6 jonlinton$ ./bin/mongo MongoDB shell version: 2.4.6 connecting to: test Mon Aug 26 14:48:47.168 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145 exception: connect failed Am I missing a step? Should I be creating a config file?
If you have installed MongoDB through Homebrew, you can simply start it with (use mongodb-community if that is the formula you installed):

brew services start mongodb
OR
brew services start mongodb-community

Then access the shell by running mongo. You can shut down your db with brew services stop mongodb, restart it with brew services restart mongodb, and see more options with brew info mongodb.
MongoDB
18,452,023
89
I'm doing a python script that writes some data to a mongodb. I need to close the connection and free some resources, when finishing. How is that done in Python?
Use the close() method on your MongoClient instance:

client = pymongo.MongoClient()
# some code here
client.close()

From the docs: cleanup client resources and disconnect from MongoDB. End all server sessions created by this client by sending one or more endSessions commands. Close all sockets in the connection pools and stop the monitor threads.
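Recent pymongo versions also let MongoClient act as a context manager, which calls close() automatically on exit; a small sketch, with the URI and names as placeholders:

import pymongo

# close() is invoked when the with-block ends, even on exceptions.
with pymongo.MongoClient("mongodb://localhost:27017/") as client:
    client["mydb"]["mycol"].insert_one({"status": "done"})
# All sockets and monitor threads are cleaned up here.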
MongoDB
18,401,015
89
I have over 300k records in one collection in Mongo. When I run this very simple query: db.myCollection.find().limit(5); it takes only a few milliseconds. But when I use skip in the query: db.myCollection.find().skip(200000).limit(5) it won't return anything... it runs for minutes and returns nothing. How can I make it better?
One approach to this problem, if you have large quantities of documents and you are displaying them in sorted order (I'm not sure how useful skip is if you're not) would be to use the key you're sorting on to select the next page of results. So if you start with db.myCollection.find().limit(100).sort({created_date:true}); and then extract the created date of the last document returned by the cursor into a variable max_created_date_from_last_result, you can get the next page with the far more efficient (presuming you have an index on created_date) query db.myCollection.find({created_date : { $gt : max_created_date_from_last_result } }).limit(100).sort({created_date:true});
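A rough pymongo version of this keyset-style pagination; the field and collection names follow the answer's example, and an index on created_date is assumed:

from pymongo import MongoClient, ASCENDING

coll = MongoClient()["test"]["myCollection"]
page_size = 100

# First page, sorted by the paging key.
page = list(coll.find().sort("created_date", ASCENDING).limit(page_size))

while page:
    last_created = page[-1]["created_date"]
    # Next page: seek past the last value seen instead of using skip().
    page = list(
        coll.find({"created_date": {"$gt": last_created}})
        .sort("created_date", ASCENDING)
        .limit(page_size)
    )

If created_date is not unique, ties can cause documents to be skipped; sorting on a compound key that ends in a unique field avoids that.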
MongoDB
7,228,169
89
I would like to connect to the database specified in the connection string, without specifying it again in GetDatabase. For example, if I have a connection string like this; mongodb://localhost/mydb I would like to be able to db.GetCollection("mycollection") from mydb. This would allow the database name to be configured easily in the app.config file.
Update: MongoServer.Create is obsolete now (thanks to @aknuds1). Use the following code instead:

var _server = new MongoClient(connectionString).GetServer();

It's easy: first take the database name from the connection string, then get the database by name. Complete example:

var connectionString = "mongodb://localhost:27020/mydb";
// take the database name from the connection string
var _databaseName = MongoUrl.Create(connectionString).DatabaseName;
var _server = MongoServer.Create(connectionString);
// and then get the database by name:
_server.GetDatabase(_databaseName);

Important: if your database and auth database are different, you can add an authSource= query parameter to specify a different auth database (thank you to @chrisdrobison). From the docs: NOTE If you are using the database segment as the initial database to use, but the username and password specified are defined in a different database, you can use the authSource option to specify the database in which the credential is defined. For example, mongodb://user:pass@hostname/db1?authSource=userDb would authenticate the credential against the userDb database instead of db1.
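The answer above is for the C# driver; for comparison, pymongo exposes the same idea through get_default_database(), which reads the database name from the connection string. A hedged sketch with placeholder URI and collection name:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/mydb")

# get_default_database() returns the database named in the connection string ("mydb"),
# so the name only has to live in configuration.
db = client.get_default_database()
collection = db["mycollection"]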
MongoDB
7,201,847
89
I'm new to mongodb and am trying to query child objects. I have a collection of States, and each State has child Cities. One of the Cities has a Name property that is null, which is causing errors in my app. How would I query the State collections to find child Cities that have a name == null?
If it is exactly null (as opposed to not set): db.states.find({"cities.name": null}) (but as javierfp points out, it also matches documents that have no cities array at all, I'm assuming that they do). If it's the case that the property is not set: db.states.find({"cities.name": {"$exists": false}}) I've tested the above with a collection created with these two inserts: db.states.insert({"cities": [{name: "New York"}, {name: null}]}) db.states.insert({"cities": [{name: "Austin"}, {color: "blue"}]}) The first query finds the first state, the second query finds the second. If you want to find them both with one query you can make an $or query: db.states.find({"$or": [ {"cities.name": null}, {"cities.name": {"$exists": false}} ]})
MongoDB
4,762,947
89
I am preparing to build an Android/iOS app that will require me to make complex polygon and containment geospatial queries. I like Apache Cassandra's no single point of failure, fault tolerance and data center awareness. Cassandra does not have direct support for geospatial queries (that I am aware of) but MongoDB and Couchbase Server do. MongoDB has scaling issues and I'm not sure if Couchbase would be a better alternative than Cassandra with Solr or Elasticsearch. Would I be making a mistake by going with Datastax Enterprise (DSE), Cassandra and Elasticsearch over Couchbase Server? Will there be a noticeable difference in load times for web pages with the Cassandra/ES back end vs. Couchbase?
Aerospike just released Server Community Edition 3.7.0, which includes Geospatial Indexes as a feature. Aerospike can now store GeoJSON objects and execute various queries, allowing an application to track rapidly changing Geospatial objects or simply ask the question of “what’s near me”. Internally, we use Google’s S2 library and Geo Hashing to encode and index these points and regions. The following types of queries are supported: Points within a Region Points within a Radius Regions a Point is in This can be combined with a User-Defined Function (UDF) to filter the results – i.e., to further refine the results to only include Bars, Restaurants or Places of Worship near you – even ones that are currently open or have availability. Additionally, finding the Region a point is in allows, for example, an advertiser to figure out campaign regions that the mobile user is in – and therefore place a geospatially targeted advertisement. Internally, the same storage mechanisms are used, which enables highly concurrent reads and writes to the Geospatial data or other data held on the record. Geospatial data is a lot of fun to play around with, so we have included a set of examples based on Open Street Map and Yelp Dataset Challenge data. Geospatial is an Experimental feature in the 3.7.0 release. It’s meant for developers to try out and provide feedback. We think the APIs are good, but in an experimental feature, based on the feedback from the community, Aerospike may choose to modify these APIs by the time this feature is GA. It’s not intended for Production usage right now (though we know some developers will go directly to Production ...)
Cassandra
24,121,192
11
I have had issues with spark-cassandra-connector (1.0.4, 1.1.0) when writing batches of 9 millions rows to a 12 nodes cassandra (2.1.2) cluster. I was writing with consistency ALL and reading with consistency ONE but the number of rows read was every time different from 9 million (8.865.753, 8.753.213 etc.). I've checked the code of the connector and found no issues. Then, I decided to write my own application, independent from spark and the connector, to investigate the problem (the only dependency is datastax-driver-code version 2.1.3). The full code, the startup scripts and the configuration files can now be found on github. In pseudo-code, I wrote two different version of the application, the sync one: try (Session session = cluster.connect()) { String cql = "insert into <<a table with 9 normal fields and 2 collections>>"; PreparedStatement pstm = session.prepare(cql); for(String partitionKey : keySource) { // keySource is an Iterable<String> of partition keys BoundStatement bound = pstm.bind(partitionKey /*, << plus the other parameters >> */); bound.setConsistencyLevel(ConsistencyLevel.ALL); session.execute(bound); } } And the async one: try (Session session = cluster.connect()) { List<ResultSetFuture> futures = new LinkedList<ResultSetFuture>(); String cql = "insert into <<a table with 9 normal fields and 2 collections>>"; PreparedStatement pstm = session.prepare(cql); for(String partitionKey : keySource) { // keySource is an Iterable<String> of partition keys while(futures.size()>=10 /* Max 10 concurrent writes */) { // Wait for the first issued write to terminate ResultSetFuture future = futures.get(0); future.get(); futures.remove(0); } BoundStatement bound = pstm.bind(partitionKey /*, << plus the other parameters >> */); bound.setConsistencyLevel(ConsistencyLevel.ALL); futures.add(session.executeAsync(bound)); } while(futures.size()>0) { // Wait for the other write requests to terminate ResultSetFuture future = futures.get(0); future.get(); futures.remove(0); } } The last one is similar to that used by the connector in the case of no-batch configuration. The two versions of the application work the same in all circumstances, except when the load is high. For instance, when running the sync version with 5 threads on 9 machines (45 threads) writing 9 millions rows to the cluster, I find all the rows in the subsequent read (with spark-cassandra-connector). If I run the async version with 1 thread per machine (9 threads), the execution is much faster but I cannot find all the rows in the subsequent read (the same problem that arised with the spark-cassandra-connector). No exception was thrown by the code during the executions. What could be the cause of the issue ? I add some other results (thanks for the comments): Async version with 9 threads on 9 machines, with 5 concurrent writers per thread (45 concurrent writers): no issues Sync version with 90 threads on 9 machines (10 threads per JVM instance): no issues Issues seemed start arising with Async writes and a number of concurrent writers > 45 and <=90, so I did other tests to ensure that the finding were right: Replaced the "get" method of ResultSetFuture with "getUninterruptibly": same issues. Async version with 18 threads on 9 machines, with 5 concurrent writers per thread (90 concurrent writers): no issues. The last finding shows that the high number of concurrent writers (90) is not an issue as was expected in the first tests. The problem is the high number of async writes using the same session. 
With 5 concurrent async writes on the same session the issue is not present. If I increase to 10 the number of concurrent writes, some operations get lost without notification. It seems that the async writes are broken in Cassandra 2.1.2 (or the Cassandra Java driver) if you issue multiple (>5) writes concurrently on the same session.
Nicola and I communicated over email this weekend, and I thought I'd provide an update here with my current theory. I took a look at the github project Nicola shared and experimented with an 8 node cluster on EC2. I was able to reproduce the issue with 2.1.2, but did observe that after a period of time I could re-execute the spark job and all 9 million rows were returned. What I seemed to notice was that while nodes were under compaction I did not get all 9 million rows. On a whim I took a look at the change log for 2.1 and observed an issue CASSANDRA-8429 - "Some keys unreadable during compaction" that may explain this problem. Seeing that the issue has been fixed and is targeted for 2.1.3, I reran the test against the cassandra-2.1 branch and ran the count job while compaction activity was happening and got 9 million rows back. I'd like to experiment with this some more since my testing with the cassandra-2.1 branch was rather limited and the compaction activity may have been purely coincidental, but I'm hoping this may explain these issues.
Cassandra
27,667,228
11
I have a table with a column of type list and I would like to check if there is an item inside the list, using CONTAINS keyword. According to scylla documentation: The CONTAINS operator may only be used on collection columns (lists, sets, and maps). In the case of maps, CONTAINS applies to the map values. The CONTAINS KEY operator may only be used on map columns and applies to the map keys. https://docs.scylladb.com/getting-started/dml/ To reproduce the error I am receiving do the following: CREATE TABLE test.persons ( id int PRIMARY KEY,lastname text, books list<text>); INSERT INTO test.persons(id, lastname, books) values (1, 'Testopoulos',['Dracula','1984']); SELECT * FROM test.persons id | books | lastname ----+---------------------+------------- 1 | ['Dracula', '1984'] | Testopoulos (1 rows) SELECT * FROM test.persons WHERE books CONTAINS '1984' ALLOW FILTERING; InvalidRequest: Error from server: code=2200 [Invalid query] message="Collection filtering is not supported yet"
Support for CONTAINS keyword for filtering is already implemented in Scylla, but it's not part of any official release yet - it will be included in the upcoming 3.1 release (or, naturally, if you build it yourself from the newest source). Here's the reference from the official tracker: https://github.com/scylladb/scylla/issues/3573
Cassandra
57,874,319
11
I want to use batch statement to delete a row from 3 tables in my database to ensure atomicity. The partition key is going to be the same in all the 3 tables. In all the examples that I read about batch statements, all the queries were for a single table? In my case, is it a good idea to use batch statements? Or, should I avoid it? I'm using Cassandra-3.11.2 and I execute my queries using the C++ driver.
Yes, you can use a batch to ensure atomicity. Single-partition batches are faster (same table and same partition key), but for a limited number of partitions (in your case three) a batch is okay. Just don't use it as a performance optimization (e.g. to reduce the number of requests); use it when you need atomicity. You can check the links below: Cassandra batch query performance on tables having different partition keys, Cassandra batch query vs single insert performance, How single partition batch in cassandra function for multiple column update?

EDITED: "In my case, the tables are different but the partition key is the same in all 3 tables. So is this a special case of single partition batch or is it something entirely different?"

For different tables the partitions are also different, so this is a multi-partition batch. LOGGED batches are used to ensure atomicity across different partitions (different tables or different partition keys). UNLOGGED batches are used to ensure atomicity and isolation within a single-partition batch. If you use an UNLOGGED batch for a multi-partition batch, atomicity will not be ensured. The default is a LOGGED batch; for a single-partition batch the default is UNLOGGED, because a single-partition batch is treated as a single row mutation, and for a single row mutation there is no need for a LOGGED batch. See the doc shared below to learn more about LOGGED vs UNLOGGED batches.

Multi-partition batches should only be used to achieve atomicity for a few writes on different tables; apart from that they should be avoided because they're too expensive. Single-partition batches can be used to achieve atomicity and isolation, and they're not much more expensive than normal writes. In your case a multi-partition LOGGED batch is fine because the number of partitions is limited. A very useful doc on batches with all the details; if you read it, the confusion will be cleared: Cassandra - to BATCH or not to BATCH

Partition key tokens vs row partition: table partitions and partition key tokens are different things. The partition key decides which node the data resides on. For the same row key the partition tokens are the same and thus reside on the same node, but different partition keys, or the same key in different tables, are different row mutations. You cannot get data with one query for different partition keys or from different tables, even for the same key; the coordinator node has to treat them as different requests/mutations and ask the replica nodes for the actual data separately. It's the internal structure of how C* stores data: every table even has its own directory structure, making it clear that a partition from one table will never interact with a partition of another. Does the same partition key in different cassandra tables add up to cell theoretical limit? To know in detail how C* maps data, check this link: Understanding How CQL3 Maps to Cassandra's Internal Data Structure
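For illustration only, here is roughly what a LOGGED batch across three tables sharing a partition key value looks like with the Python driver (the question uses the C++ driver; the keyspace, table and column names here are made up):

from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, BatchType

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")

delete_a = session.prepare("DELETE FROM table_a WHERE pk = ?")
delete_b = session.prepare("DELETE FROM table_b WHERE pk = ?")
delete_c = session.prepare("DELETE FROM table_c WHERE pk = ?")

# LOGGED gives atomicity across the three (different-table) partitions.
batch = BatchStatement(batch_type=BatchType.LOGGED)
batch.add(delete_a, ("some-key",))
batch.add(delete_b, ("some-key",))
batch.add(delete_c, ("some-key",))
session.execute(batch)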
Cassandra
49,356,986
11
I understand in Mongo we can have one master and multiple slaves where the master will be used for writes and slaves will be used for reading operations. Say M1, M2, M3 are nodes with M1 as master But I read Cassandra is said to be a master-less model. Every node is said to be master. I did not get what it means? Say M1, M2, M3 are nodes with M1 as master and M2, M3 are slaves in Mongo I believe write will always go M1 and read will always go to M2, M3 Say C1, C2, C3 are nodes in Cassandra Here I believe write and Read request can go to any node. That's why it is called master-less model.
You are right, the nodes in Cassandra are equal and all of them can respond to user's query. That's because Cassandra picks Availability and Partition Tolerance in the CAP Theorem (whereas MongoDB picks Consistency and Partition Tolerance). And Cassandra can have linear scalability by simply adding new nodes into the node ring to handle more data. The tradeoff here is the consistency problem. In Cassandra, they provide a solution called Replication Factor and Consistency Level to ensure the consistency as much as possible while maintaining strong availability. Here is a good explanation of how read, write and replication work in Cassandra: brief-introduction-apache-cassandra
Cassandra
48,434,860
11
I have a 5 node cluster of Cassandra, with ~650 GB of data on each node involving a replication factor of 3. I have recently started seeing the following error in /var/log/cassandra/system.log. INFO [ReadStage-5] 2017-10-17 17:06:07,887 NoSpamLogger.java:91 - Maximum memory usage reached (1.000GiB), cannot allocate chunk of 1.000MiB I have attempted to increase the file_cache_size_in_mb, but sooner rather than later this same error catches up. I have tried to go as high as 2GB for this parameter, but to no avail. When the error happens, the CPU utilisation soars and the read latencies are terribly erratic. I see this surge show up approximated every 1/2 hour. Note the timings in the list below. INFO [ReadStage-5] 2017-10-17 17:06:07,887 NoSpamLogger.java:91 - Maximum memory usage reached (1.000GiB), cannot allocate chunk of 1.000MiB INFO [ReadStage-36] 2017-10-17 17:36:09,807 NoSpamLogger.java:91 - Maximum memory usage reached (1.000GiB), cannot allocate chunk of 1.000MiB INFO [ReadStage-15] 2017-10-17 18:05:56,003 NoSpamLogger.java:91 - Maximum memory usage reached (2.000GiB), cannot allocate chunk of 1.000MiB INFO [ReadStage-28] 2017-10-17 18:36:01,177 NoSpamLogger.java:91 - Maximum memory usage reached (2.000GiB), cannot allocate chunk of 1.000MiB Two of the tables that I have are partitioned by hour, and the partitions are large. Ex. Here are their outputs from nodetool table stats Read Count: 4693453 Read Latency: 0.36752741680805157 ms. Write Count: 561026 Write Latency: 0.03742310516803143 ms. Pending Flushes: 0 Table: raw_data SSTable count: 55 Space used (live): 594395754275 Space used (total): 594395754275 Space used by snapshots (total): 0 Off heap memory used (total): 360753372 SSTable Compression Ratio: 0.20022598072758296 Number of keys (estimate): 45163 Memtable cell count: 90441 Memtable data size: 685647925 Memtable off heap memory used: 0 Memtable switch count: 1 Local read count: 0 Local read latency: NaN ms Local write count: 126710 Local write latency: 0.096 ms Pending flushes: 0 Percent repaired: 52.99 Bloom filter false positives: 167775 Bloom filter false ratio: 0.16152 Bloom filter space used: 264448 Bloom filter off heap memory used: 264008 Index summary off heap memory used: 31060 Compression metadata off heap memory used: 360458304 Compacted partition minimum bytes: 51 **Compacted partition maximum bytes: 3449259151** Compacted partition mean bytes: 16642499 Average live cells per slice (last five minutes): 1.0005435888450147 Maximum live cells per slice (last five minutes): 42 Average tombstones per slice (last five minutes): 1.0 Maximum tombstones per slice (last five minutes): 1 Dropped Mutations: 0 Read Count: 4712814 Read Latency: 0.3356051004771247 ms. Write Count: 643718 Write Latency: 0.04168356951335834 ms. 
Pending Flushes: 0 Table: customer_profile_history SSTable count: 20 Space used (live): 9423364484 Space used (total): 9423364484 Space used by snapshots (total): 0 Off heap memory used (total): 6560008 SSTable Compression Ratio: 0.1744084338623116 Number of keys (estimate): 69 Memtable cell count: 35242 Memtable data size: 789595302 Memtable off heap memory used: 0 Memtable switch count: 1 Local read count: 2307 Local read latency: NaN ms Local write count: 51772 Local write latency: 0.076 ms Pending flushes: 0 Percent repaired: 0.0 Bloom filter false positives: 0 Bloom filter false ratio: 0.00000 Bloom filter space used: 384 Bloom filter off heap memory used: 224 Index summary off heap memory used: 400 Compression metadata off heap memory used: 6559384 Compacted partition minimum bytes: 20502 **Compacted partition maximum bytes: 4139110981** Compacted partition mean bytes: 708736810 Average live cells per slice (last five minutes): NaN Maximum live cells per slice (last five minutes): 0 Average tombstones per slice (last five minutes): NaN Maximum tombstones per slice (last five minutes): 0 Dropped Mutations: 0 Here goes: cdsdb/raw_data histograms Percentile SSTables Write Latency Read Latency Partition Size Cell Count (micros) (micros) (bytes) 50% 0.00 61.21 0.00 1955666 642 75% 1.00 73.46 0.00 17436917 4768 95% 3.00 105.78 0.00 107964792 24601 98% 8.00 219.34 0.00 186563160 42510 99% 12.00 315.85 0.00 268650950 61214 Min 0.00 6.87 0.00 51 0 Max 14.00 1358.10 0.00 3449259151 7007506 cdsdb/customer_profile_history histograms Percentile SSTables Write Latency Read Latency Partition Size Cell Count (micros) (micros) (bytes) 50% 0.00 73.46 0.00 223875792 61214 75% 0.00 88.15 0.00 668489532 182785 95% 0.00 152.32 0.00 1996099046 654949 98% 0.00 785.94 0.00 3449259151 1358102 99% 0.00 943.13 0.00 3449259151 1358102 Min 0.00 24.60 0.00 5723 4 Max 0.00 5839.59 0.00 5960319812 1955666 Could you please suggest a way forward to mitigate this issue?
Based on the cfhistograms output posted, the partitions are enormous. 95% percentile of raw_data table has partition size of 107MB and max of 3.44GB. 95% percentile of customer_profile_history has partition size of 1.99GB and max of 5.96GB. This clearly relates to the problem you notice every half-hour as these huge partitions are written to the sstable. The data-model has to change and based on the partition size above its better to have a partition interval as "minute" instead of "hour". So a 2GB partition would reduce to 33MB partition. Recommended partition size is to keep it as close to 100MB maximum. Though theoretically we can store more than 100MB, the performance is going to suffer. Remember every read of that partition is over 100MB of data through the wire. In your case, its over 2GB and hence all the performance implications along with it.
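One hedged sketch of the suggested change, computing a minute-level bucket on the client side to use as part of the partition key; the function and format are illustrative, not the asker's actual schema:

from datetime import datetime

def minute_bucket(ts: datetime) -> str:
    # One bucket per minute instead of per hour, splitting a ~2 GB hourly
    # partition into roughly 60 much smaller ones.
    return ts.strftime("%Y-%m-%d %H:%M")

# e.g. INSERT ... (bucket, ...) VALUES (?, ...) with bucket = minute_bucket(event_time)
print(minute_bucket(datetime(2017, 10, 17, 17, 6)))  # "2017-10-17 17:06"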
Cassandra
46,796,021
11
I have seen this warning everywhere but cannot find any detailed explanation on this topic.
For starters, the maximum number of cells (rows x columns) in a single partition is 2 billion. If you allow a partition to grow unbounded you will eventually hit this limitation. Outside that theoretical limit, there are practical limitations tied to the impact large partitions have on the JVM and on read times. These practical limitations are constantly increasing from version to version, but they are not fixed; they vary with data model, query patterns, heap size, and configuration, which makes it hard to give a straight answer on what's too large.

As of 2.1 and the early 3.0 releases, the primary cost on reads and compactions comes from deserializing the index, which marks a row every column_index_size_in_kb. You can increase key_cache_size_in_mb for reads to prevent unnecessary deserialization, but that reduces heap space and fills old gen. You can increase the column index size, but it will increase the worst-case IO cost on reads. There are also many different settings for CMS and G1 to tune the impact of a huge spike in object allocations when reading these big partitions. There are active efforts to improve this, so in the future it might no longer be the bottleneck.

Repairs also only go down (in the best-case scenario) to the partition level. So if, say, you are constantly appending to a partition, and a hash of that partition on 2 nodes is compared at not exactly the same time (a distributed system essentially guarantees this), the entire partition must be streamed over to ensure consistency. Incremental repairs can reduce the impact of this, but you are still streaming massive amounts of data and fluctuating disk usage significantly, which will then need to be compacted together unnecessarily.

You can keep adding corner cases and scenarios that have issues. Many times large partitions are possible to read, but the tuning and corner cases involved are not really worth it; it's better to design the data model to be friendly with how Cassandra expects it. I would recommend targeting 100mb, but you can go far beyond that comfortably. Into the Gbs and you will need to start considering tuning for it (depending on data model, use case etc).
Cassandra
46,272,571
11
I'm always getting the following error.Can somebody help me please? Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/Logging at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at com.datastax.spark.connector.japi.DStreamJavaFunctions.<init>(DStreamJavaFunctions.java:24) at com.datastax.spark.connector.japi.CassandraStreamingJavaUtil.javaFunctions(CassandraStreamingJavaUtil.java:55) at SparkStream.main(SparkStream.java:51) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) Caused by: java.lang.ClassNotFoundException: org.apache.spark.Logging at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 20 more When I compile the following code. I've searched the web but didn't find a solution. I've got the error when I added the saveToCassandra. import com.datastax.spark.connector.japi.CassandraStreamingJavaUtil; import org.apache.spark.SparkConf; import org.apache.spark.api.java.JavaSparkContext; import org.apache.spark.streaming.Duration; import org.apache.spark.streaming.api.java.JavaDStream; import org.apache.spark.streaming.api.java.JavaPairInputDStream; import org.apache.spark.streaming.api.java.JavaStreamingContext; import org.apache.spark.streaming.kafka.KafkaUtils; import java.io.Serializable; import java.util.Collections; import java.util.HashMap; import java.util.Map; import java.util.Set; import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow; /** * Created by jonas on 10/10/16. 
*/ public class SparkStream implements Serializable{ public static void main(String[] args) throws Exception{ SparkConf conf = new SparkConf(true) .setAppName("TwitterToCassandra") .setMaster("local[*]") .set("spark.cassandra.connection.host", "127.0.0.1") .set("spark.cassandra.connection.port", "9042"); ; JavaSparkContext sc = new JavaSparkContext(conf); JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(5000)); Map<String, String> kafkaParams = new HashMap<>(); kafkaParams.put("bootstrap.servers", "localhost:9092"); Set<String> topics = Collections.singleton("Test"); JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream( ssc, String.class, String.class, kafka.serializer.StringDecoder.class, kafka.serializer.StringDecoder.class, kafkaParams, topics ); JavaDStream<Tweet> createTweet = directKafkaStream.map(s -> createTweet(s._2)); CassandraStreamingJavaUtil.javaFunctions(createTweet) .writerBuilder("mykeyspace", "rawtweet", mapToRow(Tweet.class)) .saveToCassandra(); ssc.start(); ssc.awaitTermination(); } public static Tweet createTweet(String rawKafka){ String[] splitted = rawKafka.split("\\|"); Tweet t = new Tweet(splitted[0], splitted[1], splitted[2], splitted[3]); return t; } } My pom is the following. <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.company</groupId> <artifactId>Sentiment</artifactId> <version>1.0-SNAPSHOT</version> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> </plugins> </build> <repositories> <repository> <id>twitter4j.org</id> <name>twitter4j.org Repository</name> <url>http://twitter4j.org/maven2</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> <dependencies> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming_2.11</artifactId> <version>2.0.1</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_2.10</artifactId> <version>2.0.0</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-sql_2.10</artifactId> <version>2.0.0</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming-kafka-0-8_2.11</artifactId> <version>2.0.1</version> </dependency> <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library --> <dependency> <groupId>org.scala-lang</groupId> <artifactId>scala-library</artifactId> <version>2.11.8</version> </dependency> <!-- https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector_2.10 --> <dependency> <groupId>com.datastax.spark</groupId> <artifactId>spark-cassandra-connector_2.10</artifactId> <version>1.6.2</version> </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka_2.10</artifactId> <version>0.9.0.0</version> </dependency> <dependency> <groupId>org.twitter4j</groupId> <artifactId>twitter4j-core</artifactId> <version>[4.0,)</version> </dependency> <dependency> <groupId>org.twitter4j</groupId> <artifactId>twitter4j-stream</artifactId> <version>4.0.4</version> </dependency> <dependency> <groupId>org.twitter4j</groupId> 
<artifactId>twitter4j-async</artifactId> <version>4.0.4</version> </dependency> </dependencies> </project>
org.apache.spark.Logging is available in Spark version 1.5.2 or lower. It is not in 2.0.0. Please change the versions as follows:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>1.6.2</version>
</dependency>
Cassandra
40,287,289
11
I am using Spark to consume data from Kafka and save it in Cassandra. My program is written in Java. I am using the spark-streaming-kafka_2.10:1.6.2 lib to accomplish this. My code is: SparkConf sparkConf = new SparkConf().setAppName("name"); JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000)); Map<String,String> kafkaParams = new HashMap<>(); kafkaParams.put("zookeeper.connect", "127.0.0.1"); kafkaParams.put("group.id", App.GROUP); JavaPairReceiverInputDStream<String, EventLog> messages = KafkaUtils.createStream(jssc, String.class, EventLog.class, StringDecoder.class, EventLogDecoder.class, kafkaParams, topicMap, StorageLevel.MEMORY_AND_DISK_SER_2()); JavaDStream<EventLog> lines = messages.map(new Function<Tuple2<String, EventLog>, EventLog>() { @Override public EventLog call(Tuple2<String, EventLog> tuple2) { return tuple2._2(); } }); lines.foreachRDD(rdd -> { javaFunctions(rdd).writerBuilder("test", "event_log", mapToRow(EventLog.class)).saveToCassandra(); }); jssc.start(); In my Cassandra table event_log, there is a column named offsetid to store the offset ID of the stream. How do I get the offset id till where this stream has read the Kafka stream and store it in Cassandra? After saving it in Cassandra, I want to use the latest offset id to be used when Spark is started again. How do I do that?
Below is the code for reference you may need to change the things as per your requirement. What I have done with the code and approach is that maintain Kafka partition wise offset for each topic in Cassandra(This can be done in zookeeper also as a suggestion using its java api). Store or update the the latest offset range for the topic with each string message received, in EventLog table. So always retrieve from table and see if present, then create direct stream from that offset, otherwise fresh direct stream. package com.spark; import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions; import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowTo; import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import kafka.common.TopicAndPartition; import kafka.message.MessageAndMetadata; import kafka.serializer.StringDecoder; import org.apache.spark.SparkConf; import org.apache.spark.api.java.JavaPairRDD; import org.apache.spark.api.java.JavaRDD; import org.apache.spark.api.java.function.Function; import org.apache.spark.streaming.Duration; import org.apache.spark.streaming.api.java.JavaDStream; import org.apache.spark.streaming.api.java.JavaStreamingContext; import org.apache.spark.streaming.kafka.HasOffsetRanges; import org.apache.spark.streaming.kafka.KafkaUtils; import org.apache.spark.streaming.kafka.OffsetRange; import scala.Tuple2; public class KafkaChannelFetchOffset { public static void main(String[] args) { String topicName = "topicName"; SparkConf sparkConf = new SparkConf().setAppName("name"); JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000)); HashSet<String> topicsSet = new HashSet<String>(Arrays.asList(topicName)); HashMap<TopicAndPartition, Long> kafkaTopicPartition = new HashMap<TopicAndPartition, Long>(); Map<String, String> kafkaParams = new HashMap<>(); kafkaParams.put("zookeeper.connect", "127.0.0.1"); kafkaParams.put("group.id", "GROUP"); kafkaParams.put("metadata.broker.list", "127.0.0.1"); List<EventLog> eventLogList = javaFunctions(jssc).cassandraTable("test", "event_log", mapRowTo(EventLog.class)) .select("topicName", "partion", "fromOffset", "untilOffset").where("topicName=?", topicName).collect(); JavaDStream<String> kafkaOutStream = null; if (eventLogList == null || eventLogList.isEmpty()) { kafkaOutStream = KafkaUtils.createDirectStream(jssc, String.class, String.class, StringDecoder.class, StringDecoder.class, kafkaParams, topicsSet).transform(new Function<JavaPairRDD<String, String>, JavaRDD<String>>() { @Override public JavaRDD<String> call(JavaPairRDD<String, String> pairRdd) throws Exception { JavaRDD<String> rdd = pairRdd.map(new Function<Tuple2<String, String>, String>() { @Override public String call(Tuple2<String, String> arg0) throws Exception { return arg0._2; } }); writeOffset(rdd, ((HasOffsetRanges) rdd.rdd()).offsetRanges()); return rdd; } }); } else { for (EventLog eventLog : eventLogList) { kafkaTopicPartition.put(new TopicAndPartition(topicName, Integer.parseInt(eventLog.getPartition())), Long.parseLong(eventLog.getUntilOffset())); } kafkaOutStream = KafkaUtils.createDirectStream(jssc, String.class, String.class, StringDecoder.class, StringDecoder.class, String.class, kafkaParams, kafkaTopicPartition, new Function<MessageAndMetadata<String, String>, String>() { @Override public String call(MessageAndMetadata<String, String> arg0) throws Exception { return arg0.message(); } }).transform(new Function<JavaRDD<String>, 
JavaRDD<String>>() { @Override public JavaRDD<String> call(JavaRDD<String> rdd) throws Exception { writeOffset(rdd, ((HasOffsetRanges) rdd.rdd()).offsetRanges()); return rdd; } }); } // Use kafkaOutStream for further processing. jssc.start(); } private static void writeOffset(JavaRDD<String> rdd, final OffsetRange[] offsets) { for (OffsetRange offsetRange : offsets) { EventLog eventLog = new EventLog(); eventLog.setTopicName(String.valueOf(offsetRange.topic())); eventLog.setPartition(String.valueOf(offsetRange.partition())); eventLog.setFromOffset(String.valueOf(offsetRange.fromOffset())); eventLog.setUntilOffset(String.valueOf(offsetRange.untilOffset())); javaFunctions(rdd).writerBuilder("test", "event_log", null).saveToCassandra(); } } } Hope this helps and resolve your problem...
Cassandra
39,167,776
11
I get bulk write request for let say some 20 keys from client. I can either write them to C* in one batch or write them individually in async way and wait on future to get them completed. Writing in batch does not seem to be a goo option as per documentation as my insertion rate will be high and if keys belong to different partitions co-ordinators will have to do extra work. Is there a way in datastax java driver with which I can group keys which could belong to same partition and then club them into small batches and then do invidual unlogged batch write in async. IN that way i make less rpc calls to server at the same time coordinator will have to write locally. I will be using token aware policy.
Your idea is right, but there is no built-in way; you usually do it manually. The main rule here is to use TokenAwarePolicy, so some coordination happens on the driver side. Then you can group your requests by equality of partition key, which would probably be enough, depending on your workload.

What I mean by "grouping by equality of partition key" is: say you have some data that looks like MyData { partitioningKey, clusteringKey, otherValue, andAnotherOne }. When inserting several such objects, you group them by MyData.partitioningKey. That is, for every existing partitioningKey value you take all objects with that same partitioningKey and wrap them in a BatchStatement. Now you have several BatchStatements, so just execute them.

If you wish to go further and mimic Cassandra's hashing, you should look at the cluster metadata via the getMetadata method of the com.datastax.driver.core.Cluster class; there is a getTokenRanges method whose result you can compare to Murmur3Partitioner.getToken (or whichever partitioner you configured in cassandra.yaml). I've never tried that myself though.

So I would recommend implementing the first approach and then benchmarking your application. I'm using that approach myself, and on my workload it works far better than without batches, let alone batches without grouping.
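A hedged Python-driver sketch of the first approach (grouping by partition key and sending one unlogged batch per group). It assumes the legacy load_balancing_policy constructor argument, and all keyspace, table and column names are invented:

from collections import defaultdict
from cassandra.cluster import Cluster
from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy
from cassandra.query import BatchStatement, BatchType

cluster = Cluster(
    ["127.0.0.1"],
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()),
)
session = cluster.connect("my_keyspace")
insert = session.prepare(
    "INSERT INTO my_data (partitioning_key, clustering_key, other_value) VALUES (?, ?, ?)"
)

def write_grouped(rows):
    # rows: iterable of (partitioning_key, clustering_key, other_value) tuples
    groups = defaultdict(list)
    for row in rows:
        groups[row[0]].append(row)          # group by equality of partition key
    futures = []
    for _, group in groups.items():
        batch = BatchStatement(batch_type=BatchType.UNLOGGED)  # single-partition batch
        for row in group:
            batch.add(insert, row)
        futures.append(session.execute_async(batch))
    for f in futures:                        # wait for all batches to complete
        f.result()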
Cassandra
38,931,909
11
I have read over several documents regarding the Cassandra commit log and, to me, there is conflicting information regarding this "structure(s)". The diagram shows that when a write occurs, Cassandra writes to the memtable and commit log. The confusing part is where this commit log resides. The diagram that I've seen over-and-over shows the commit log on disk. However, if you do some more reading, they also talk about a commit log buffer in memory - and that piece of memory is flushed to disk every 10 seconds. DataStax Documentation states: "When a write occurs, Cassandra stores the data in a memory structure called memtable, and to provide configurable durability, it also appends writes to the commit log buffer in memory. This buffer is flushed to disk every 10 seconds". Nowhere in their diagram do they show a memory structure called a commit log buffer. They only show the commit log residing on disk. It also states: "When a write occurs, Cassandra stores the data in a structure in memory, the memtable, and also appends writes to the commit log on disk." So I'm confused by the above. Is it written to the commit log memory buffer, which is eventually flushed to disk (which I would assume is also called the "commit log"), or is it written to the memtable and commit log on disk? Apache's documentation states this: "Instead, like other modern systems, Cassandra provides durability by appending writes to a commitlog first. This means that only the commitlog needs to be fsync'd, which, if the commitlog is on its own volume, obviates the need for seeking since the commitlog is append-only. Implementation details are in ArchitectureCommitLog. Cassandra's default configuration sets the commitlog_sync mode to periodic, causing the commitlog to be synced every commitlog_sync_period_in_ms milliseconds, so you can potentially lose up to that much data if all replicas crash within that window of time." What I have inferred from the Apache statement is that ONLY because of the asynchronous nature of writes (acknowledgement of a cache write) could you lose data (it even states you can lose data if all replicas crash before it is flushed/sync'd). I'm not sure what I can infer from the DataStax documentation and diagram as they've mentioned two different statements regarding the commit log - one in memory, one on disk. Can anyone clarify, what I consider, a poorly worded and conflicting set of documentation? I'll assume there is a commit log buffer, as they both reference it (yet DataStax doesn't show it in the diagram). How and when this is managed, I think, is a key to understand.
Generally when explaining the write path, the commit log is characterized as a file, and it's true the commit log is the on-disk storage mechanism that provides durability. The confusion is introduced when going deeper, when buffer cache and the need to issue fsyncs come up. The reference to a "commit log buffer in memory" is talking about the OS buffer cache, not a memory structure in Cassandra. You can see in the code that there is no separate in-memory structure for the commit log; rather, the mutation is serialized and written to a file-backed buffer.

Cassandra comes with two strategies for managing fsync on the commit log, controlled by commitlog_sync (default: periodic), the method Cassandra uses to acknowledge writes:

periodic: (default period: 10000 milliseconds [10 seconds]) Used with commitlog_sync_period_in_ms to control how often the commit log is synchronized to disk. Periodic syncs are acknowledged immediately.
batch: (disabled by default) Used with commitlog_sync_batch_window_in_ms (default: 2 ms) to control how long Cassandra waits for other writes before performing a sync. With this method, writes are not acknowledged until fsynced to disk.

Periodic offers better performance at the cost of a small increase in the chance that data can be lost. The batch setting guarantees durability at the cost of latency.
Cassandra
38,506,734
11
After posting a question and reading this and that article, I still do not understand the relations between these three operations: Cassandra compaction tasks, nodetool repair, and nodetool cleanup. Can a repair task be processed while a compaction task is running, or a cleanup while a compaction task is running? Is cleanup an operation that needs to be executed weekly like repair? Why does the repair operation need to be executed manually, and why is it not part of Cassandra's default behavior? What are the ground rules for healthy cluster maintenance?
A cleanup is a compaction that just removes things outside the node's token range(s). A repair has a "Validation Compaction" to build a merkle tree to compare with the other nodes, so part of nodetool repair involves a compaction.

Can a repair task be processed while a compaction task is running, or a cleanup while a compaction task is running? There is a shared pool for compactions across normal compactions, repairs, cleanups, scrubs etc. This is the concurrent_compactors setting in cassandra.yaml, which defaults to a combination of the number of cores and data directories: https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L572

Is cleanup an operation that needs to be executed weekly like repair? No, really only after topology changes.

Why does the repair operation need to be executed manually, and why is it not part of Cassandra's default behavior? It's manual because its requirements can differ a lot depending on your data and gc_grace requirements. https://issues.apache.org/jira/browse/CASSANDRA-10070 is bringing it inside Cassandra though, so in the future it will be automatic.

What are the ground rules for healthy cluster maintenance? I would (opinion) say: regular backups (depending on requirements and acceptable data loss, this can be anything from weekly/daily to constant incremental backups); this is just as much for "internal" mistakes ("Oops, I deleted a customer") as for outages, and even with strong multi-DC replication you want some minimum backups. Making sure a repair completes for all tables that have deletes at least once within the gc_grace time of those tables. Metrics and log storage are pretty important if you want to be able to debug issues.
Cassandra
37,684,547
11
I'm on my path to learning Cassandra, and the differences in CQL and SQL, but I'm noticing the absence of a way to check to see if a record exists with Cassandra. Currently, the best way that I have is to use SELECT primary_keys FROM TABLE WHERE primary_keys = blah, and checking to see if the results set is empty. Is there a better way to do this, or do I have the right idea for now?
Using count will make it traverse all the matching rows just to be able to count them. But you only need to check one, so just limit and return whatever. Then interpret the presence of the result as true and absence - as false. E.g., SELECT primary_keys FROM TABLE WHERE primary_keys = blah LIMIT 1
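A minimal sketch of that existence check with the Python driver; the keyspace, table and column names are taken loosely from the question and are otherwise placeholders:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_keyspace")

stmt = session.prepare("SELECT primary_keys FROM my_table WHERE primary_keys = ? LIMIT 1")
row = session.execute(stmt, ("blah",)).one()
exists = row is not None  # presence of a single row means the record exists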
Cassandra
34,455,098
11
I have a problem when i use spark streaming to read from Cassandra. https://github.com/datastax/spark-cassandra-connector/blob/master/doc/8_streaming.md#reading-from-cassandra-from-the-streamingcontext As the link above, i use val rdd = ssc.cassandraTable("streaming_test", "key_value").select("key", "value").where("fu = ?", 3) to select the data from cassandra, but it seems that the spark streaming has just one query once but i want it continues to query using an interval 10 senconds. My code is as follow, wish for your response. Thanks! import org.apache.spark._ import org.apache.spark.streaming._ import com.datastax.spark.connector.streaming._ import org.apache.spark.rdd._ import scala.collection.mutable.Queue object SimpleApp { def main(args: Array[String]){ val conf = new SparkConf().setAppName("scala_streaming_test").set("spark.cassandra.connection.host", "127.0.0.1") val ssc = new StreamingContext(conf, Seconds(10)) val rdd = ssc.cassandraTable("mykeyspace", "users").select("fname", "lname").where("lname = ?", "yu") //rdd.collect().foreach(println) val rddQueue = new Queue[RDD[com.datastax.spark.connector.CassandraRow]]() val dstream = ssc.queueStream(rddQueue) dstream.print() ssc.start() rdd.collect().foreach(println) rddQueue += rdd ssc.awaitTermination() } }
You can create a ConstantInputDStream with the CassandraRDD as input. ConstantInputDStream will provide the same RDD on each streaming interval, and by executing an action on that RDD you will trigger a materialization of the RDD lineage, leading to executing the query on Cassandra every time. Make sure that the data being queried does not grow unbounded to avoid increasing query times and resulting in an unstable streaming process. Something like this should do the trick (using your code as starting point): import org.apache.spark.streaming.dstream.ConstantInputDStream val ssc = new StreamingContext(conf, Seconds(10)) val cassandraRDD = ssc.cassandraTable("mykeyspace", "users").select("fname", "lname").where("lname = ?", "yu") val dstream = new ConstantInputDStream(ssc, cassandraRDD) dstream.foreachRDD{ rdd => // any action will trigger the underlying cassandra query, using collect to have a simple output println(rdd.collect.mkString("\n")) } ssc.start() ssc.awaitTermination()
Cassandra
32,451,614
11
I am performing a cql query on a column that stores the values as unix timestmap, but want the results to output as datetime. Is there a way to do this? i.e. something like the following: select convertToDateTime(column) from table;
I'm trying to remember if there's an easier, more direct route. But if you have a table with a UNIX timestamp and want to show it in a datetime format, you can combine the dateOf and min/maxTimeuuid functions together, like this: aploetz@cqlsh:stackoverflow2> SELECT datetime,unixtime,dateof(mintimeuuid(unixtime)) FROM unixtime; datetimetext | unixtime | dateof(mintimeuuid(unixtime)) ----------------+---------------+------------------------------- 2015-07-08 | 1436380283051 | 2015-07-08 13:31:23-0500 (1 rows) aploetz@cqlsh:stackoverflow2> SELECT datetime,unixtime,dateof(maxtimeuuid(unixtime)) FROM unixtime; datetimetext | unixtime | dateof(maxtimeuuid(unixtime)) ----------------+---------------+------------------------------- 2015-07-08 | 1436380283051 | 2015-07-08 13:31:23-0500 (1 rows) Note that timeuuid stores greater precision than either a UNIX timestamp or a datetime, so you'll need to first convert it to a TimeUUID using either the min or maxtimeuuid function. Then you'll be able to use dateof to convert it to a datetime timestamp.
Cassandra
31,300,682
11
I am trying to find the total physical size occupied by a Cassandra keyspace. I have a message generator which dumps a lot of messages into Cassandra, and I want to find out the total physical size of those messages in the Cassandra table. When I run du -h /mnt/data/keyspace, Linux reports only 12 KB. I am sure that the data size is much greater than that; the rest of the data must either be in memtables or in compaction. How do I find the total space occupied in Cassandra for that keyspace? I tried nodetool cfstats <keyspace>, but it only reports for that particular node, and it also counts bytes still sitting in the memtable. What I actually want is the total size of the keyspace as written to disk across all nodes in the cluster. Is there any command to find this? Thanks for the help.
What is Compaction?

SSTables are immutable -- once a memtable is flushed to disk, it remains unchanged until it is deleted (expired) or compacted. Compaction is the process of combining sstables together. This is important when your workload is update heavy and you may have several instances of a CQL row stored in your SSTables (see sstables per read in nodetool cfhistograms). When you go to read that row, you may have to scan across multiple sstables to find the latest version of the data (in C* last write wins). When we compact, we may take up additional space on disk (especially size tiered compaction, which may take up to--this is a theoretical maximum--50% of your data size when compacting) so it is important to keep free disk space. However, compaction will not take data away from your keyspace directory. This is not where your data is.

Then where did my data go?

You're right in your suspicion that data that has not yet been flushed to disk must be sitting in memtables. This data will make it to disk as soon as your commitlog fills up (default 1gb in 2.0 or 8gb in 2.1) or as soon as your memtables get too big -- memtable_total_space_in_mb. If you want to see your data in sstables, you can flush it manually:

nodetool flush

and your memtables will be dropped into your KS directory in the form of SSTables. Or just be patient and wait until you hit either the commitlog or memtable thresholds.

But aren't Cassandra writes durable?

Yes, your memtable data is also stored in the commitlog. If your machine loses power, etc, the data that has been written is still persisted to disk and the commit-log data will get replayed on startup!
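If you want to check the on-disk effect yourself, here is a small sketch: flush, then measure the keyspace directory. The nodetool path, keyspace name and data directory are assumptions; run it on each node and sum the results for a cluster-wide figure:

import os
import subprocess

KEYSPACE = "mykeyspace"                     # assumption: your keyspace name
DATA_DIR = "/mnt/data/" + KEYSPACE          # assumption: your keyspace data directory

# Force the memtables for the keyspace onto disk as SSTables.
subprocess.check_call(["nodetool", "flush", KEYSPACE])

# Walk the keyspace directory and sum the size of every file in it.
total_bytes = 0
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        total_bytes += os.path.getsize(os.path.join(root, name))

print("on-disk size of %s on this node: %.1f MB" % (KEYSPACE, total_bytes / (1024.0 * 1024.0)))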
Cassandra
29,915,307
11
Is there any limit on maximum length we can specify for a column while creating Cassandra table, if yes, then how much we can specify? I am new to using Cassandra, please let me know
The maximum number of cells (rows x columns) in a single partition is 2 billion, the maximum column key (and row key) size is 64 KB, and the maximum column value size is 2 GB. You can refer to this: https://cwiki.apache.org/confluence/display/CASSANDRA2/CassandraLimitations
Cassandra
27,968,287
11
Created a table in Cassandra where the primary key is based on two columns(groupname,type). When I'm trying to insert more than 1 row where the groupname and type is same, then in such situation its not storing more than one row, subsequent writes where in the groupname and type are same.. then the latest write is replacing the previous similar writes. Why Cassandra is replacing in this manner instead of writing every row im inserting? Write 1 cqlsh:resto> insert into restmaster (rest_id,type,rname,groupname,address,city,country)values(blobAsUuid(timeuuidAsBlob(now())),'SportsBar','SportsDen','VK Group','Majestic','Bangalore','India'); Write 2 insert into restmaster (rest_id,type,rname,groupname,address,city,country)values(blobAsUuid(timeuuidAsBlob(now())),'SportsBar','Sports Spot','VK Group','Bandra','Mumbai','India'); Write 3 cqlsh:resto> insert into restmaster (rest_id,type,rname,groupname,address,city,country)values(blobAsUuid(timeuuidAsBlob(now())),'SportsBar','Cricket Heaven ','VK Group','Connaught Place','New Delhi','India'); The result Im expecting(check rows 4,5,6) groupname | type | rname ----------------+------------+----------------- none | Udipi | Gayatri Bhavan none | dinein | Blue Diamond VK Group | FoodCourt | FoodLion VK Group | SportsBar | Sports Den VK Group | SportsBar | Sports Spot VK Group | SportsBar | Cricket Heaven Viceroy Group | Vegetarian | Palace Heights Mainland Group | Chinese | MainLand China JSP Group | FoodCourt | Nautanki Ohris | FoodCourt | Ohris But this is the actual result (write 3 has replaced previous 2 inserts [rows 4,5]) cqlsh:resto> select groupname,type,rname From restmaster; groupname | type | rname ----------------+------------+----------------- none | Udipi | Gayatri Bhavan none | dinein | Blue Diamond VK Group | FoodCourt | FoodLion VK Group | SportsBar | Cricket Heaven Viceroy Group | Vegetarian | Palace Heights Mainland Group | Chinese | MainLand China JSP Group | FoodCourt | Nautanki Ohris | FoodCourt | Ohris cqlsh:resto> describe table restmaster; CREATE TABLE restmaster ( groupname text, type text, address text, city text, country text, rest_id uuid, rname text, PRIMARY KEY ((groupname), type) ) WITH bloom_filter_fp_chance=0.010000 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.100000 AND gc_grace_seconds=864000 AND index_interval=128 AND read_repair_chance=0.000000 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'LZ4Compressor'};
All inserts to the Cassandra database are actually insert/update operations, and there can only be one set of non-key values per uniquely defined primary key. This means that you cannot ever have more than one set of values for one primary key, and that you will only see the last write.

More info: http://www.datastax.com/documentation/cql/3.1/cql/cql_intro_c.html

Update: A data model

If you used a key like

PRIMARY KEY ((groupname), type, rname)

then, as long as you have unique restaurant names, you will be able to get the results you are expecting. But what you really should be asking is "What queries would I like to perform on this data?" All Cassandra tables should be based around satisfying a class of queries. The key I wrote above basically says "This table is constructed to quickly look up all the restaurants in a particular group, and the only conditionals I will use will be on type and on restaurant name."

Example queries you could perform with that schema:

SELECT * FROM restmaster WHERE groupname = 'Lettuce Entertain You' ;
SELECT * FROM restmaster WHERE groupname = 'Lettuce Entertain You' and type = 'Formal' ;
SELECT * FROM restmaster WHERE groupname = 'Lettuce Entertain You' and type = 'Formal' and rname > 'C' and rname < 'Y' ;

If those aren't the kinds of queries you want to perform in your application, or you want other queries in addition to those, you will most likely need additional tables.
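To make that concrete, here is a rough sketch with the Python driver, shown as a separate table because the key of an existing table cannot be changed (contact point and keyspace are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('resto')   # assumption: the keyspace from the question

# rname is now part of the primary key, so each restaurant gets its own row
# instead of overwriting the previous write for the same (groupname, type).
session.execute("""
    CREATE TABLE IF NOT EXISTS restmaster2 (
        groupname text,
        type text,
        rname text,
        rest_id uuid,
        address text,
        city text,
        country text,
        PRIMARY KEY ((groupname), type, rname)
    )
""")

insert = session.prepare(
    "INSERT INTO restmaster2 (groupname, type, rname) VALUES (?, ?, ?)")
for rname in ('Sports Den', 'Sports Spot', 'Cricket Heaven'):
    session.execute(insert, ('VK Group', 'SportsBar', rname))

# All three rows survive, because their primary keys are distinct.
for row in session.execute(
        "SELECT groupname, type, rname FROM restmaster2 WHERE groupname = 'VK Group'"):
    print(row.groupname, row.type, row.rname)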
Cassandra
25,817,451
11
Disclaimer: This is quite a long post. I first explain the data I am dealing with, and what I want to do with it. Then I detail three possible solutions I have considered, because I've tried to do my homework (I swear :]). I end up with a "best guess" which is a variation of the first solution. My ultimate question is: what's the most sensible way to solve my problem using Cassandra? Is it one of my attempts, or is it something else? I am looking for advice/feedback from experienced Cassandra users... My data: I have many SuperDocuments that own Documents in a tree structure (headings, subheadings, sections, …). Each SuperDocument structure can change (renaming of headings mostly) over time, thus giving me multiple versions of the structure as shown below. What I'm looking for: For each SuperDocument I need to timestamp those structures by date as above and I'd like, for a given date, to find the closest earlier version of the SuperDocument structure. (ie. the most recent version for which version_date < given_date) These considerations might help solving the problem more easily: Versions are immutable: changes are rare enough, I can create a new representation of the whole structure each time it changes. I do not need to access a subtree of the structure. I'd say it is OK to say that I do not need to find all the ancestors of a given leaf, nor do I need to access a specific node/leaf inside the tree. I can work all of this out in my client code once I have the whole tree. OK let's do it Please keep in mind I am really just starting using Cassandra. I've read/watched a lot of resources about data modeling, but haven't got much (any!) experience in the field! Which also means everything will be written in CQL3... sorry Thrift lovers! My first attempt at solving this was to create the following table: CREATE TABLE IF NOT EXISTS superdoc_structures ( doc_id varchar, version_date timestamp, pre_pos int, post_pos int, title text, PRIMARY KEY ((doc_id, version_date), pre_pos, post_pos) ) WITH CLUSTERING ORDER BY (pre_pos ASC); That would give me the following structure: I'm using a Nested Sets model for my trees here; I figured it would work well to keep the structure ordered, but I am open to other suggestions. I like this solution: each version has its own row, in which each column represents a level of the hierarchy. The problem though is that I (candidly) intended to query my data as follows: SELECT * FROM superdoc_structures WHERE doc_id="3399c35...14e1" AND version_date < '2014-03-11' LIMIT 1 Cassandra quickly reminded me I was not allowed to do that! (because the partitioner does not preserve row order on the cluster nodes, so it is not possible to scan through partition keys) What then...? Well, because Cassandra won't let me use inequalities on partition keys, so be it! I'll make version_date a clustering key and all my problems will be gone. Yeah, not really... First try: CREATE TABLE IF NOT EXISTS superdoc_structures ( doc_id varchar, version_date timestamp, pre_pos int, post_pos int, title text, PRIMARY KEY (doc_id, version_date, pre_pos, post_pos) ) WITH CLUSTERING ORDER BY (version_date DESC, pre_pos ASC); I find this one less elegant: all versions and structure levels are made into columns of a now very wide row (compared to my previous solution): Problem: with the same request, using LIMIT 1 will only return the first heading. And using no LIMIT would return all versions structure levels, which I would have to filter to only keep the most recent ones. 
Second try: there's no second try yet... I have an idea though, but I feel it's not using Cassandra wisely. The idea would be to cluster by version_date only, and somehow store whole hierarchies in each column values. Sounds bad doesn't it? I would do something like this: CREATE TABLE IF NOT EXISTS superdoc_structures ( doc_id varchar, version_date timestamp, nested_sets map<int, int>, titles list<text>, PRIMARY KEY (doc_id, version_date) ) WITH CLUSTERING ORDER BY (version_date DESC); The resulting row structure would then be: It looks kind of all right to me in fact, but I will probably have more data than the level title to de-normalize into my columns. If it's only two attributes, I could go with another map (associating titles with ids for instance), but more data would lead to more lists, and I have the feeling it would quickly become an anti-pattern. Plus, I'd have to merge all lists together in my client app when the data comes in! ALTERNATIVE & BEST GUESS After giving it some more thought, there's an "hybrid" solution that might work and may be efficient and elegant: I could use another table that would list only the version dates of a SuperDocument & cache these dates into a Memcache instance (or Redis or whatever) for real quick access. That would allow me to quickly find the version I need to fetch, and then request it using the composite key of my first solution. That's two queries, plus a memory cache store to manage. But I may end up with one anyway, so maybe that'd be the best compromise? Maybe I don't even need a cache store? All in all, I really feel the first solution is the most elegant one to model my data. What about you?!
First, you don't need to use memcache or redis. Cassandra will give you very fast access to that information. You could certainly have a table that was something like:

create table superdoc_structures (
  doc_id varchar,
  version_date timestamp,
  /* stuff */
  primary key (doc_id, version_date)
) with clustering order by (version_date desc);

which would give you a quick way to access a given version (this query may look familiar ;-):

select * from superdoc_structures
where doc_id = '3399c35...14e1'
  and version_date < '2014-03-11'
order by version_date desc
limit 1;

Since nothing about the document tree structure seems to be relevant from the schema's point of view, and you are happy as a clam to create the document in its entirety every time there is a new version, I don't see why you'd even bother breaking out the tree into separate rows. Why not just have the entire document in the table as a text or blob field?

create table superdoc_structures (
  doc_id varchar,
  version_date timestamp,
  contents text,
  primary key (doc_id, version_date)
) with clustering order by (version_date desc);

So to get the contents of the document as it existed at the new year, you'd do:

select contents from superdoc_structures
where doc_id = '....'
  and version_date < '2014-01-01'
order by version_date desc
limit 1;

Now, if you did want to maintain some kind of hierarchy of the document components, I'd recommend doing something like a closure table to represent it. Alternatively, since you are willing to copy the entire document on each write anyway, why not copy the entire section info on each write as well, with a schema like:

create table superdoc_structures (
  doc_id varchar,
  version_date timestamp,
  section_path varchar,
  contents text,
  primary key (doc_id, version_date, section_path)
) with clustering order by (version_date desc, section_path asc);

Then have section_path use a syntax like "first_level next_level sub_level leaf_name". As a side benefit, when you have the version_date of the document (or if you create a secondary index on section_path), because a space is lexically "lower" than any other valid character, you can actually grab a subsection very cleanly:

select section_path, contents from superdoc_structures
where doc_id = '....'
  and version_date = '2013-12-22'
  and section_path >= 'chapter4 subsection2'
  and section_path < 'chapter4 subsection2!';

Alternatively, you can store the sections using Cassandra's support for collections, but again... I'm not sure why you'd even bother breaking them out, as doing them as one big chunk works just great.
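For completeness, a small sketch of the "closest earlier version" lookup against the simple (doc_id, version_date) layout above, using the Python driver (contact point and keyspace are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')   # assumption: your keyspace

lookup = session.prepare("""
    SELECT version_date, contents
    FROM superdoc_structures
    WHERE doc_id = ? AND version_date < ?
    ORDER BY version_date DESC
    LIMIT 1
""")

def structure_as_of(doc_id, as_of):
    # Returns the most recent version written strictly before `as_of`, or None.
    row = session.execute(lookup, (doc_id, as_of)).one()
    return row.contents if row else None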
Cassandra
25,449,640
11
I'm a Cassandra newbie. I understand the purpose of the seed node, but are there any costs associated with being a seed node? If so, what are they? Otherwise, I'm wondering why not just make every node a seed node?
There are essentially no local runtime costs associated with being a seed, other than you may receive more gossip traffic than a non-seed node. However with increasing number of seeds, this local effect will be progressively less pronounced. More interesting are distributed effects. Seed nodes are favored for gossiping, which means that if there are only a few of them, updates will be concentrated among those few seeds. Non-seed nodes will try to send gossip updates to the seeds (picking randomly from their seed list), and so if everyone sends updates to the same few nodes, they are bound to have the most recent cluster metadata. At the same time, gossiping also involves receiving metadata from the seeds, which means that everyone who gossips with the few seed nodes will also benefit from the most recent updates. The end result is that updates are disseminated relatively quickly throughout the cluster, at the cost of concentrating some of the gossip traffic on fewer nodes. Compare that to a situation where every node is a seed. When some node gossips, it essentially talks to another random node in the cluster, which is not any likelier to gossip with the rest of the cluster. So the update that our first node just sent to the "seed" is not going to propagate around especially quickly. Furthermore, because the seed does not receive a larger proportion of all gossip updates, the info it is able to send back to our node is not particularly up-to-date either (in fact both nodes would have approximately the same probability of not knowing about some disconnected update in the cluster). So we get full decentralization, but with completely random update propagation. In real terms, if you have a large number of seeds, you may be subject to flapping, ghosting and other strange behaviors related to old topology info persisting for longer than it should.
Cassandra
22,569,260
11
When importing the DataStax cassandra-driver (Python) I get the following error:

File "cassandra.py", line 1, in <module>
    from cassandra.cluster import Cluster
File "/home/vagrant/cassandra.py", line 1, in <module>
    from cassandra.cluster import Cluster
ImportError: No module named cluster

This is the code:

from cassandra.cluster import Cluster
print dir(cassandra.cluster)
cluster = Cluster()
session = cluster.connect('userspace')
rows = session.execute('SELECT user_name, gender FROM users')
for user_row in rows:
    print user_row.user_name, user_row.gender
Well, it looks like a name conflict. Your script is named cassandra.py, so Python imports it instead of the driver's cassandra package. Rename your script (and don't forget to delete its .pyc) and try again.
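For example, the same code in a file with a non-conflicting name (the file name here is just an example) works as expected:

# shorten_test.py -- NOT cassandra.py, so `import cassandra` finds the driver package.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])           # adjust contact points as needed
session = cluster.connect('userspace')

rows = session.execute('SELECT user_name, gender FROM users')
for user_row in rows:
    print(user_row.user_name, user_row.gender)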
Cassandra
22,449,688
11
First of all I am relatively new to Big Data and the Hadoop world and I have just started to experiment a little with the Hortonworks Sandbox (Pig and Hive so far). I was wondering in which cases could I use the above mentioned tools of Hadoop, Hive, Pig, HBase and Cassandra? In my sandbox environment with a file of just 9MB Hive and Pig had response times of seconds to minutes. This is obviously not usable in some situations for example web applications (unless it is something else such as my virtual machine setup). My guesses about the correct usages are: Hadoop: Just the technological base for the rest, only very few use-cases where it would be used directly Hive or Pig: For analytical processes that run once per hour or day HBase or Cassandra: for real-time applications (e.g. web applications) where response times with 100ms or less are required Additionally, when to use HBase as opposed to when to use Cassandra? Thanks!
Your guesses are somewhat accurate. By Hadoop, I guess you are referring to MapReduce? Hadoop as such is an ecosystem which consists of many components (including MapReduce, HDFS, Pig and Hive). MapReduce is good when you need to write the logic for processing data at the Map() and Reduce() method level. In my work, I find MapReduce very useful when I'm dealing with data that is unstructured & needs to be cleansed. Hive,Pig: They are good for batch processes, running periodically (maybe in terms of hours or days) HBase & Cassandra: Support low latency calls. So they can be used for real time applications, where response time is key. Have a look at this discussion to get a better idea about HBase vs Cassandra.
Cassandra
21,439,029
11
I run nodetool status on a Cassandra 2.0.3 instance, the output shows: Datacenter: eu-west =================== Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns Host ID Rack UN xx.xxx.xxx.xxx 154.81 KB 256 33.8% bcfb1057-60d7-4d94-b277-a1e8c0f55cf4 1a I looked for all manuals available online, but none of them explains what UN means. What does the UN mean?
U=Up N=Normal It's the first letter of Status followed by the first letter of State.
Cassandra
20,678,366
11
I'm currently using Cassandra and researching data modeling practices for it. So far, I understand that you need to model your data based on the queries that will be executed. However, multiple select requirements make data modeling harder, or impossible to handle in a single table. So, when you can't satisfy the requirements with one table, you need to add 2-3 tables; in other words, you need to make multiple inserts for one logical operation. Currently, I'm dealing with a data model for a campaign structure. I have a campaign table in Cassandra with the following CQL:

CREATE TABLE campaign_users (
  created_at timeuuid,
  campaign_id int,
  uid bigint,
  updated_at timestamp,
  PRIMARY KEY (campaign_id, uid),
  INDEX(campaign_id, created_at)
);

In this model, I need to be able to make incremental exports given only a timestamp. In Cassandra, there is an ALLOW FILTERING mode that enables select queries on secondary indexes. So, my CQL statement for the incremental export is the following:

select campaign_id, uid from campaign_users where created_at > minTimeuuid('2013-08-14 12:26:06+0000') allow filtering;

However, if ALLOW FILTERING is used, there is a warning saying that the statement may have unpredictable performance. So, is it good practice to rely on ALLOW FILTERING? What are the alternatives?
The ALLOW FILTERING warning is because Cassandra is internally skipping over data, rather than using an index and seeking. This is unpredictable because you don't know how much data Cassandra is going to skip over per row returned. You could be scanning through all your data to return zero rows, in the worst case. This is in contrast to operations without ALLOW FILTERING (apart from SELECT COUNT queries), where the data read through scales linearly with the amount of data returned. This is OK if you're returning most of the data, so the data skipped over doesn't cost very much. But if you were skipping over most of your data a lot of work will be wasted. The alternative is to include time in the first component of your primary key, in buckets. E.g. you could have day buckets and duplicate your queries for each day that contains data you need. This method guarantees that most of the data Cassandra reads over is data that you want. The problem is that all data for the bucket (e.g. day) needs to fit in one partition. You can fix this by sharding the partition somehow e.g. include some aspect of the uid within it.
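A rough sketch of the day-bucket idea with the Python driver. All names are made up for illustration, and an int bucket like 20130814 is used so it does not depend on any particular CQL date type:

from datetime import date, timedelta
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')        # assumption: your keyspace

# Day bucket as the partition key, creation time as a clustering column, so a
# date-range export only touches the partitions for the days involved.
session.execute("""
    CREATE TABLE IF NOT EXISTS campaign_users_by_day (
        day int,              -- e.g. 20130814
        created_at timeuuid,
        campaign_id int,
        uid bigint,
        PRIMARY KEY ((day), created_at)
    )
""")

query = session.prepare(
    "SELECT campaign_id, uid FROM campaign_users_by_day WHERE day = ?")

def incremental_export(since):
    # One partition-scoped query per day bucket from `since` to today; filter
    # finer than a day client-side, or add a created_at clustering restriction.
    day = since
    while day <= date.today():
        for row in session.execute(query, (int(day.strftime("%Y%m%d")),)):
            yield row.campaign_id, row.uid
        day += timedelta(days=1)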
Cassandra
18,694,413
11
I need to fetch rows excluding specific keys. For example:

select * from users where user_id not in ('mikko');

I have tried "not in" and this is the response:

Bad Request: line 1:35 no viable alternative at input 'not'
"not in" is not a supported operation in CQL. Cassandra at its heart is still based on key indexed rows. So that query is basically the same as "select * from users", as you have to go through every row and figure out if it does not match the in. If you want to do that type of query you will want to setup a map reduce job to perform it. When using Cassandra what you actually want to do is de-normalize your data model so that the queries you application performs end up querying a single partition (or just a few partitions) for their results. Also find some great webinars and talks on Cassandra data modeling http://www.youtube.com/watch?v=T_WRC_GjRd0&feature=youtu.be http://youtu.be/x4Q9JeLIyNo http://www.youtube.com/watch?v=HdJlsOZVGwM&list=PLqcm6qE9lgKJzVvwHprow9h7KMpb5hcUU&index=10
Cassandra
18,498,445
11
I am using Cassandra 1.2.5. After creating a column family in Cassandra using cassandra-cli, is it possible to modify the primary key on the column family using either cassandra-cli or CQL? Specifically, I currently have the following table (from CQL): CREATE TABLE "table1" ( key blob, column1 blob, value blob, PRIMARY KEY (key, column1) ); I would like the table to be as follows, without having to drop and recreate the table: CREATE TABLE "table1" ( key blob, column1 blob, value blob, PRIMARY KEY (key) ); Is this possible through either cassandra-cli or CQL?
The primary keys directly determine how and where cassandra stores the data contained in a table (column family). The primary key consists of partition key and clustering key (optional). The partition key determines which node stores the data. It is responsible for data distribution across the nodes. The additional columns determine per-partition clustering (see compound key documentation). So changing the primary key will always require all data to be migrated. I do not think that either cqlsh or cassandra-cli have a command for this (as of 2015)..
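Since the key cannot be altered in place, the usual route is to create a second table with the key you want and copy the data across. A rough driver-based sketch follows; for large tables a bulk tool such as Spark or sstableloader is a better fit, and the contact point and keyspace are placeholders:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')        # assumption: your keyspace

# New table with the desired primary key.
session.execute("""
    CREATE TABLE IF NOT EXISTS table1_new (
        key blob,
        column1 blob,
        value blob,
        PRIMARY KEY (key)
    )
""")

insert = session.prepare(
    "INSERT INTO table1_new (key, column1, value) VALUES (?, ?, ?)")

# Page through the old table and re-insert row by row. Note that with the new
# key, multiple (key, column1) rows collapse into one row per key: last write wins.
scan = SimpleStatement('SELECT key, column1, value FROM "table1"', fetch_size=1000)
for row in session.execute(scan):
    session.execute(insert, (row.key, row.column1, row.value))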
Cassandra
18,421,668
11
CQL 3 allows for a "compound" primary key using a definition like this: CREATE TABLE timeline ( user_id varchar, tweet_id uuid, author varchar, body varchar, PRIMARY KEY (user_id, tweet_id) ); With a schema like this, the partition key (storage engine row key) will consist of the user_id value, while the tweet_id will be compounded into the column name. What I am looking for, instead, is for the partition key (storage engine row key) to have a composite value like user_id:tweet_id. Obviously I could do something like key = user_id + ':' + tweet_id in my application, but is there any way to have CQL 3 do this for me?
Actually, yes you can. That functionality was added in this ticket: https://issues.apache.org/jira/browse/CASSANDRA-4179 The format for you would be: CREATE TABLE timeline ( user_id varchar, tweet_id uuid, author varchar, body varchar, PRIMARY KEY ((user_id, tweet_id)) );
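For what it's worth, a small usage sketch with the Python driver: because user_id and tweet_id together form the partition key, both must be supplied when reading (contact point and keyspace are placeholders):

import uuid
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')        # assumption: your keyspace

tweet_id = uuid.uuid1()
session.execute(
    "INSERT INTO timeline (user_id, tweet_id, author, body) VALUES (%s, %s, %s, %s)",
    ('gmason', tweet_id, 'gmason', 'hello world'))

# Both user_id and tweet_id are needed in the WHERE clause, because together
# they make up the partition key (the storage engine row key).
row = session.execute(
    "SELECT * FROM timeline WHERE user_id = %s AND tweet_id = %s",
    ('gmason', tweet_id)).one()
print(row.author, row.body)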
Cassandra
13,938,288
11
Is it possible in Cassandra to use multiple conditions OR'd together in the WHERE clause of a SELECT statement, like in an RDBMS? Here is my code:

SELECT * from TABLE_NAME WHERE COND1 = 'something' OR COND2 = 'something';
Assuming COND is the name of your table's primary key, you can do: SELECT * from TABLE_NAME WHERE COND1 in ('something', 'something'); So, there is no fully general OR operation, but it looks like this may be equivalent to what you were trying to do. Remember, as you use CQL, that query planning is not meant to be one of its strengths. Cassandra code typically makes the assumption that you have huge amounts of data, so it will try to avoid doing any queries that might end up being expensive. In the RMDBS world, you structure your data according to intrinsic relationships (3rd normal form, etc), whereas in Cassandra, you structure your data according to the queries you expect to need. Denormalization is (forgive the pun) the norm. Point is, CQL is intended to be a more familiar, friendly syntax for making Cassandra queries than the older Thrift RPC interface. It's not intended to be completely expressive and flexible the way that SQL is.
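For completeness, the same IN-style query issued from the Python driver as a prepared statement (table and column names follow the question; this assumes COND1 is the partition key):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')        # assumption: your keyspace

# IN on the partition key covers the "this value OR that value" case.
prepared = session.prepare("SELECT * FROM table_name WHERE cond1 IN ?")
rows = session.execute(prepared, (['something', 'something_else'],))
for row in rows:
    print(row)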
Cassandra
10,139,390
11
I have a Cassandra cluster of 12 nodes on EC2. Because of a failure we lost one of the nodes completely; I mean that the machine does not exist anymore. So I created a new EC2 instance with a different IP and the same token as the dead node, and since I also had a backup of the data on that node, it works fine. But the problem is that the dead node's IP still appears as an unreachable node in describe cluster. As that node (EC2 instance) does not exist anymore, I cannot use nodetool decommission or nodetool disablegossip. How can I get rid of this unreachable node?
I had the same problem and I resolved it with removenode, which does not require you to find and change the node token. First, get the node UUID: nodetool status DN 192.168.56.201 ? 256 13.1% 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac r1 DN 192.168.56.202 ? 256 12.6% e11d219a-0b65-461e-babc-6485343568f8 r1 UN 192.168.2.91 156.04 KB 256 12.4% e1a33ed4-d613-47a6-8b3b-325650a2bbd4 RAC1 UN 192.168.2.92 156.22 KB 256 13.6% 3a4a086c-36a6-4d69-8b61-864ff37d03c9 RAC1 UN 192.168.2.93 149.6 KB 256 11.3% 20decc72-8d0a-4c3b-8804-cc8bc98fa9e8 RAC1 As you can see the .201 and .202 are dead and on a different network. These have been changed to .91 and .92 without proper decommissioning and recommissioning. I was working on installing the network and made a few mistakes... Second, remove the .201 with the following command: nodetool removenode 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac (in older versions it was nodetool remove ...) But just like for the nodetool removetoken ..., it blocks... (see comment by samarth in psandord answer) However, it has a side effect, it puts that UUID in a list of nodes to be removed. So next we can force the removal with: nodetool removenode force (in older versions it was nodetool remove ...) Now the node accepts the command it tells me that it is removing the invalid entry: RemovalStatus: Removing token (-9136982325337481102). Waiting for replication confirmation from [/192.168.2.91,/192.168.2.92]. We also see that it communicates with the two other nodes that are up and thus it takes a little time, but it is still quite fast. Next a nodetool status does not show the .201 node. I repeat with .202 and now the status is clean. After that you may also want to run a cleanup as mentioned in psanford answer: nodetool cleanup The cleanup should be run on all nodes, one by one, to make sure the change is fully taken in account.
Cassandra
8,589,938
11
I want to export all data from a keyspace in a Cassandra cluster and import it into another cluster that has the same schema but a differently named keyspace. I've looked into the sstable2json/json2sstable utilities. However, I don't want to go to each node and deal with each individual sstable.
Simpler: take a snapshot on each node, then use the bulk loader to stream them into the new cluster.
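A rough sketch of those two steps, scripted from Python. Every path, keyspace, tag and host below is a placeholder; check the nodetool and sstableloader options for your Cassandra version:

import subprocess

KEYSPACE = "source_ks"       # placeholder: keyspace on the source cluster
TABLE = "mytable"            # placeholder: one table to stream
TARGET_HOST = "10.0.0.10"    # placeholder: a node in the destination cluster

# 1) On each source node, snapshot the keyspace (hard-links the current SSTables).
subprocess.check_call(["nodetool", "snapshot", "-t", "migration", KEYSPACE])

# 2) Stream the snapshotted SSTables into the destination cluster. sstableloader
#    takes the keyspace and table names from the last two path components, so to
#    load into a differently named keyspace, copy the snapshot files into a
#    directory whose path ends in <new_keyspace>/<table> first (path is a placeholder).
subprocess.check_call(["sstableloader", "-d", TARGET_HOST,
                       "/tmp/load/new_keyspace/%s" % TABLE])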
Cassandra
7,650,254
11
Read-your-own-writes consistency is a great improvement over so-called eventual consistency: if I change my profile picture I don't care if others see the change a minute later, but it looks weird if after a page reload I still see the old one. Can this be achieved in Cassandra without having to do a full read-check on more than one node?

Using ConsistencyLevel.QUORUM is fine when reading arbitrary data, since n>1 nodes are actually being read. However, when a client reads from the same node it writes to (and is actually using the same connection), it can be wasteful; some databases will in this case always ensure that the previously written (my) data is returned, and not some older version. Using ConsistencyLevel.ONE does not ensure this and presumably leads to race conditions. Some tests showed this: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/per-connection-quot-read-after-my-write-quot-consistency-td6018377.html

My hypothetical setup for this scenario is 2 nodes, replication factor 2, read level 1, write level 1. This leads to eventual consistency, but I want read-your-own-writes consistency on reads. Using 3 nodes, RF=3, RL=quorum and WL=quorum in my opinion leads to wasteful read requests if being consistent only on "my" data is enough.

// seo: also known as: session consistency, read-after-my-write consistency
Good question. We've had http://issues.apache.org/jira/browse/CASSANDRA-876 open for a while to add this, but nobody's bothered finishing it because CL.ONE is just fine for a LOT of workloads without any extra gymnastics. Reads are so fast anyway that doing the extra one is not a big deal (and in fact Read Repair, which is on by default, means all the nodes get checked anyway, so the difference between CL.ONE and higher is really more about availability than performance). That said, if you're motivated to help, ask on the ticket and I'll be happy to point you in the right direction.
Cassandra
6,865,545
11
I'm trying to use sstableloader to load data into an existing Cassandra ring, but I can't figure out how to actually get it to work. I'm trying to run it on a machine that has a running Cassandra node on it, but when I run it I get an error saying that port 7000 is already in use, which is the port the running Cassandra node is using for gossip. So does that mean I can only use sstableloader on a machine that is in the same network as the target Cassandra ring, but isn't actually running a Cassandra node? Any details would be useful, thanks.
Played around with sstableloader, read the source code, and finally figured out how to run sstableloader on the same machine that hosts a running Cassandra node. There are two key points to get this running. First, you need to create a copy of the Cassandra install folder for sstableloader. This is because sstableloader reads the yaml file to figure out what IP address to use for gossip, and the existing yaml file is being used by Cassandra. The second point is that you'll need to create a new loopback IP address (something like 127.0.0.2) on your machine. Once this is done, change the yaml file in the copied Cassandra install folder to listen to this IP address. I wrote a tutorial going more into detail about how to do this here: http://geekswithblogs.net/johnsPerfBlog/archive/2011/07/26/how-to-use-cassandrs-sstableloader.aspx
Cassandra
6,832,285
11
We are working with a Cassandra database that will store data in the petabyte range. We are thinking of using either ElasticSearch or Solandra, but we are having a fun time deciding between which to use. I'm wondering if the our database might get too large. I know ElasticSearch is scalable, but to what extent - especially with a Cassandra database. Solandra on the other hand is made for Cassandra and is highly scalable, but again, to what extent? Both are scalable, but how scalable using Cassandra?
Solandra is being used in the 10s of Terabytes range. Are you saying you want to index a PB of data in solandra or a subset? I think if you want 1 big index with a PB of data you are stretching the limits. but If you want a PB of indexes, then this will scale the same as Cassandra. How many nodes are you planning to run? how much disk per node?
Cassandra
6,359,712
11
I'm investigating both Cassandra and MongoDB for a new project because they share some good qualities that I will need to take advantage of for this project. I've seen plenty of shallow examples for Cassandra and MongoDB - basically how to configure and start up the database, how to add new items, how to query the items. But what I really need are examples that contain strategies for inserting and querying complex data. Something that would require one-to-many and many-to-many relationships. Actually I did find a Cassanrda/Twitter example that was pretty good here. Where can I find more examples like this? ...especially for MongoDB
Please look at http://www.10gen.com/video. There are various postings and videos explaining MongoDB from scratch, including examples. Also look at http://www.mongodb.org/display/DOCS/Schema+Design

Key remarks:
- no JOINs; use embedded documents instead, or look at database references
- denormalization is kind of common

Anything else you need?
Cassandra
5,465,762
11
I'm trying to build a small web system (URL shortening) using the NoSQL Cassandra DB; the problem I'm stuck on is auto-generation of IDs. Has someone already dealt with this problem? Thanks.

P.S. UUIDs do not work for me; I need to use all numbers from 0 to Long.MAX_VALUE (Java), so I need something that works exactly like an SQL sequence.

UPDATED: The reason why I'm not OK with GUID ids lies in the scope of my application. My app has a URL-shortening part, and I need to make the URLs as short as possible. So I follow this approach: I take numbers starting from 0 and convert them to base64 strings. As a result I have URLs like mysite.com/QA (where QA is the base64 string). This was very easy to implement using an SQL DB: I just took the auto-incremented ID, converted it to a URL, and was 100 percent sure that the URL was unique.
Don't know about Cassandra, but with mongo you can have an atomic sequence (it won't scale, but will work the way it should, even in sharded environment if the query has the sharded field). It can be done by using the findandmodify command. Let's consider we have a special collection named sequences and we want to have a sequence for post numbers (named postid), you could use code similar to this: > db.runCommand( { "findandmodify" : "sequences", "query" : { "name" : "postid"}, "update" : { $inc : { "id" : 1 }}, "new" : true } ); This command will return atomically the updated (new) document together with status. The value field contains the returned document if the command completed successfully.
Cassandra
2,771,399
11
I am in the middle of building a new app which will have very similar features to Facebook, and although obviously it won't ever have to deal with the likes of 400,000,000 users, it will still be used by a substantial user base and most of them will demand it run very, very quickly. I have extensive experience with MySQL, but a social app offers complexities which MySQL is not well suited to. I know Facebook, Twitter etc have moved towards Cassandra for a lot of their data, but I am not sure how far to go with it. For example, would you store such things as user data - username, passwords, addresses etc - in Cassandra? Would you store e-mails, comments, status updates etc in Cassandra? I have also read a lot that something like neo4j is much better for representing the friend relationships used by social apps, as it is a graph database. I am only just starting down the NoSQL route so any guidance is greatly appreciated. Would anyone be able to advise me on this? I hope I am not being too general!
I would suggest doing some testing with MySQL and with Cassandra. When we had to make a choice between PostgreSQL and MongoDB in one of my jobs, we compared query time on millions of records in both and found out that with about 10M records Postgres would provide us with adequate response times. We knew that we wouldn't get to that number of records for at least a couple of years, and we had experience with Postgres (while MongoDB wasn't very mature at the time), so we went with Postgres. My point is that you can probably look at MySQL benchmarks, do some performance tests yourself, estimate the size of your dataset and how it's going to grow, and make an informed decision that way. As for mixing relational and non-relational databases, it's something we considered as well, but decided that it would be too much of a hassle, as that would mean maintaining two kinds of software, and writing quite a bit of glue code to get the data from both. I think Cassandra would be perfectly capable of storing all your data.
Cassandra
2,581,465
11
I am learning about the Apache Cassandra database [sic]. Does anyone have any good/bad experiences with deploying Cassandra to less than dedicated hardware like the offerings of Linode or Slicehost? I think Cassandra would be a great way to scale a web service easily to meet read/write/request load... just add another Linode running a Cassandra node to the existing cluster. Yes, this implies running the public web service and a Cassandra node on the same VPS (which many can take exception with). Pros of Linode-like deployment for Cassandra: Private VLAN; the Cassandra nodes could communicate privately An API to provision a new Linode (and perhaps configure it with a "StackScript" that installs Cassandra and its dependencies, etc.) The price is right Cons: Each host is a VPS and is not dedicated of course The RAM/cost ratio is not that great once you decide you want 4GB RAM (cf. dedicated at say SoftLayer) Only 1 disk where one would prefer 2 disks I suppose (1 for the commit log and another disk for the data files themselves). Probably moot since this is shared hardware anyway. EDIT: found this which helps a bit: http://wiki.apache.org/cassandra/CassandraHardware I see that 1GB is the minimum but is this a recommendation? Could I deploy with a Linode 720 for instance (say 500 MB usable to Cassandra)? See http://www.linode.com/
How much RAM you need really depends on your workload: if you are write-mostly you can get away with less, otherwise you will want RAM for the read cache. You do get more RAM for your money at my employer, Rackspace Cloud: http://www.rackspacecloud.com/cloud_hosting_products/servers/pricing. (Our machines also have raided disks, so people typically see better I/O performance vs EC2. Dunno about Linode.) Since with most VPSes you pay roughly 2x for the next-size instance, i.e., about the same as adding a second small instance, I would recommend going with fewer, larger instances rather than more, smaller ones, since in small numbers network overhead is not negligible. I do know someone using Cassandra on 256MB VMs, but you're definitely in the minority if you go that small.
Cassandra
2,291,442
11
I am using Spring Boot 2.4.4 and Spring Data Cassandra dependency to connect to the Cassandra database. During the application startup, I am getting a DriverTimeout error (I am using VPN). I have gone through all the Stack Overflow questions similar to this and none of them worked for me. I have cross-posted the same question on the Spring Boot official page here. I used below configuration properties below - spring.data.cassandra.contact-points=xxxxxx spring.data.cassandra.username=xxxx spring.data.cassandra.password=xxxxx spring.data.cassandra.keyspace-name=xxxx spring.data.cassandra.port=9042 spring.data.cassandra.schema-action=NONE spring.data.cassandra.local-datacenter=mydc spring.data.cassandra.connection.connect-timeout=PT10S spring.data.cassandra.connection.init-query-timeout=PT20S spring.data.cassandra.request.timeout=PT10S I also added DataStax properties in the application.properties to check if they can be picked up from there or not. datastax-java-driver.basic.request.timeout = 10 seconds datastax-java-driver.advanced.connection.init-query-timeout = 10 seconds datastax-java-driver.advanced.control-connection.timeout = 10 seconds Below is the configuration I used as suggested in the post here - @EnableCassandraRepositories public class CassandraConfig { @Bean DriverConfigLoaderBuilderCustomizer cassandraDriverCustomizer() { return (builder) -> builder.withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, Duration.ofSeconds(30)); } } But I still get the same error Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S I also tried different approached like creating custom CqlSessionFactoryBean and provide all the DataStax properties programmatically to override - @EnableCassandraRepositories public class CassandraConfig extends AbstractCassandraConfiguration { @Bean(name = "session") @Primary public CqlSessionFactoryBean cassandraSession() { CqlSessionFactoryBean factory = new CqlSessionFactoryBean(); factory.setUsername(userName); factory.setPassword(password); factory.setPort(port); factory.setKeyspaceName(keyspaceName); factory.setContactPoints(contactPoints); factory.setLocalDatacenter(dataCenter); factory.setSessionBuilderConfigurer(getSessionBuilderConfigurer()); // my session builder configurer return factory; } // And provided my own SessionBuilder Configurer like below protected SessionBuilderConfigurer getSessionBuilderConfigurer() { return new SessionBuilderConfigurer() { @Override public CqlSessionBuilder configure(CqlSessionBuilder cqlSessionBuilder) { ProgrammaticDriverConfigLoaderBuilder config = DriverConfigLoader.programmaticBuilder() .withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, Duration.ofSeconds(30)) .withBoolean(DefaultDriverOption.RECONNECT_ON_INIT, true) .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(30)) .withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, Duration.ofSeconds(20)); return cqlSessionBuilder.withAuthCredentials(userName, password).withConfigLoader(config.build()); } }; } } It didn't work same error. 
Also, I excluded the Cassandra auto-configuration classes as suggested here on Stack Overflow.

I also tried to customize the session builder as below to see if that would work -

@Bean
public CqlSessionBuilderCustomizer cqlSessionBuilderCustomizer() {
    return cqlSessionBuilder -> cqlSessionBuilder.withAuthCredentials(userName, password)
            .withConfigLoader(DriverConfigLoader.programmaticBuilder()
                    .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofMillis(15000))
                    .withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, Duration.ofSeconds(30))
                    .withBoolean(DefaultDriverOption.RECONNECT_ON_INIT, true)
                    .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(30))
                    .withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, Duration.ofSeconds(20)).build());
}

Still no luck. On top of that, I also added the application.conf file on the classpath as the DataStax documentation suggests, and even though that file is getting parsed (after making a syntactical mistake I could tell it is being read), it didn't work.

application.conf -

datastax-java-driver {
  basic.request.timeout = 10 seconds
  advanced.connection.init-query-timeout = 10 seconds
  advanced.control-connection.timeout = 10 seconds
}

I also switched my Spring Boot version to 2.5.0.M3 to see if the property files would work; they do not. I have pushed my project to my GitHub account.

Update

As per the comment, I am pasting my whole stack trace. Also, this does not happen all the time; sometimes it works and sometimes it does not. I need to override the timeout from PT2S to PT10S or something.
org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:895) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:758) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:750) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE] at com.example.demo.SpringCassandraTestingApplication.main(SpringCassandraTestingApplication.java:13) [classes/:na] Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.cassandra.core.convert.CassandraConverter]: Factory method 'cassandraConverter' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession' defined in class path resource [com/example/demo/CassandraConfig.class]: Invocation of init method failed; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:651) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] ... 
19 common frames omitted Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession' defined in class path resource [com/example/demo/CassandraConfig.class]: Invocation of init method failed; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1796) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:227) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1174) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:422) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:352) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:345) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.data.cassandra.config.AbstractSessionConfiguration.requireBeanOfType(AbstractSessionConfiguration.java:100) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE] at org.springframework.data.cassandra.config.AbstractSessionConfiguration.getRequiredSession(AbstractSessionConfiguration.java:200) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE] at org.springframework.data.cassandra.config.AbstractCassandraConfiguration.cassandraConverter(AbstractCassandraConfiguration.java:73) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE] at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff.CGLIB$cassandraConverter$12(<generated>) ~[classes/:na] at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff$$FastClassBySpringCGLIB$$faa9c2c1.invoke(<generated>) ~[classes/:na] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE] at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff.cassandraConverter(<generated>) ~[classes/:na] at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_275] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_275] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_275] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_275] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] ... 20 common frames omitted Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S at com.datastax.oss.driver.api.core.DriverTimeoutException.copy(DriverTimeoutException.java:34) ~[java-driver-core-4.6.1.jar:na] at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149) ~[java-driver-core-4.6.1.jar:na] at com.datastax.oss.driver.api.core.session.Session.refreshSchema(Session.java:140) ~[java-driver-core-4.6.1.jar:na] at org.springframework.data.cassandra.config.CqlSessionFactoryBean.afterPropertiesSet(CqlSessionFactoryBean.java:437) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1855) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1792) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE] ... 43 common frames omitted
I am answering my own question here to make this complete and to let others know how I fixed this particular problem.

I am using Spring Boot 2.4.5, and I started facing this timeout issue when I upgraded to version 2.3+. Based on my experience with this issue, below is what I found.

Whatever timeout you provide in application.properties or application.conf (the DataStax notation), it somehow gets overridden by Spring Boot, or perhaps the default value from the DataStax driver is used instead. There is even an issue on the official Spring Boot project to handle this problem (check here), which was later fixed in the 2.5.0.M1 version.

My problem got fixed when I passed this as a VM argument:

$java -Ddatastax-java-driver.basic.request.timeout="15 seconds" application.jar

I passed other params as well, like advanced.control-connection.timeout, as suggested on a different forum, but that didn't work for me. Check the reference manual here for other config params.

I am getting this error locally only, so I passed this as an Eclipse VM argument and then I didn't see that error any more. Also, if I reduce the time to 7-8 seconds, I sometimes see that PT2S error again. It seems like that exception message is hardcoded somewhere irrespective of whatever timeout value you pass (that is my observation).

Update: Solution 2 - which I figured out later, and I see many people have answered with it too. The actual key that DataStax provides is given below, and this works.

@Bean
public DriverConfigLoaderBuilderCustomizer defaultProfile(){
    return builder -> builder.withString(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, "3 seconds").build();
}
Cassandra
67,217,692
10
How is columnar storage in the context of a NoSQL database like Cassandra different from that in Redshift? If Cassandra is also a columnar store, then why isn't it used for OLAP applications like Redshift?
The storage engines of Cassandra and Redshift are very different and are created for different cases. Cassandra's storage is not really "columnar" in the widely known meaning used for that type of database (Redshift, Vertica, etc.); it is much closer to the key-value family of the NoSQL world. The query syntax used in Cassandra is not ANSI SQL, and only a very limited set of queries can be run against it. Cassandra's engine is built for fast writing and reading of records based on a key, while Redshift's engine is built for fast aggregations (MPP), has wide support for analytical queries, and stores, encodes and compresses data at the column level. It can be easily understood with the following example: suppose we have a table with a user id and many metrics (for example weight, height, blood pressure, etc.). If we run an aggregate query in Redshift, like average weight, it will do the following (in the best scenario): the master sends the query to the nodes; only the data for this specific column is fetched from storage; the query is executed in parallel on all nodes; the final result is fetched back to the master. Running the same query in Cassandra will result in a scan of all "rows", and each "row" can have several versions, of which only the latest should be used in the aggregation. If you are familiar with any key-value store (Redis, Riak, DynamoDB, etc.), it is no more efficient than scanning all keys there. Cassandra is often used for analytical workflows with Spark, acting as the storage layer while Spark acts as the actual query engine, and it basically shouldn't be used for analytical queries on its own. With each released version more and more aggregation capabilities are added, but it is still very far from being a real analytical database.
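To make the contrast concrete, here is a minimal sketch using the DataStax Python driver; the metrics table, its columns, the keyspace and the contact point are all hypothetical, and it assumes a reachable cluster on a version with aggregate functions (2.2+).

    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect('demo')   # hypothetical contact point and keyspace

    # the access pattern Cassandra's engine is built for: lookup by partition key
    session.execute("SELECT weight FROM metrics WHERE user_id = %s", [42])

    # an ad-hoc aggregate has no key to route by, so it must touch every partition
    session.execute("SELECT avg(weight) FROM metrics")

The first statement is a single-partition read; the second forces the coordinator to page over every partition in the cluster, which is exactly the workload a columnar MPP engine like Redshift is optimized for.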
Cassandra
52,739,192
10
In the CQL shell, how do I list all users? I can't seem to find it anywhere on Stack Overflow.
Prior to the introduction of roles in Cassandra 2.2, authentication and authorization were based around the concept of a USER. From Cassandra 2.2 onward, CQL uses database roles to represent users and groups of users. So to list users, use the appropriate command for your version: LIST USERS; -- cassandra version < 2.2 LIST ROLES; -- cassandra version >= 2.2
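Outside of cqlsh, the same statement can be sent through a driver; below is a minimal sketch with the DataStax Python driver, assuming password authentication is enabled, the default cassandra/cassandra superuser credentials exist, and the connected role is allowed to list roles:

    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    auth = PlainTextAuthProvider(username='cassandra', password='cassandra')   # assumed credentials
    session = Cluster(['127.0.0.1'], auth_provider=auth).connect()

    for row in session.execute('LIST ROLES'):   # use 'LIST USERS' on versions < 2.2
        print(row)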
Cassandra
52,617,258
10
I started using SASI indexing and used the following setup, CREATE TABLE employee ( id int, lastname text, firstname text, dateofbirth date, PRIMARY KEY (id, lastname, firstname) ) WITH CLUSTERING ORDER BY (lastname ASC, firstname ASC); CREATE CUSTOM INDEX employee_firstname_idx ON employee (firstname) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'mode': 'CONTAINS', 'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer', 'case_sensitive': 'false'}; I perform the following query, SELECT * FROM employee WHERE firstname like '%s'; As per my study, it seems the same as normal secondary indexing in Cassandra, except for providing the LIKE search. 1) Could somebody explain how it differs from a normal secondary index in Cassandra? 2) What are the best configurations for options like mode, analyzer_class and case_sensitive? Any recommended documentation for this?
1) Could somebody explain how it differs from a normal secondary index in Cassandra? A normal secondary index is essentially another lookup table comprising the secondary index columns & the primary key. Hence it has its own set of SSTable files (disk), memtable (memory) and write overhead (CPU). SASI was an improvement open sourced and contributed by Apple to the Cassandra community. A SASI index gets created for every SSTable being flushed to disk and doesn't maintain a separate table, hence less disk usage, no separate memtable/bloom filter/partition index (less memory) and minimal overhead. 2) What are the best configurations like mode, analyzer_class and case_sensitive? The configuration depends on your use case. Essentially there are three modes: PREFIX - used to serve LIKE queries based on the prefix of the indexed column. CONTAINS - used to serve LIKE queries based on whether the search term exists in the indexed column. SPARSE - used to index data that is sparse (every term/column value has fewer than 5 matching keys), for example range queries that span large timestamps. analyzer_class: analyzers can be specified that will analyze the text in the indexed column. The NonTokenizingAnalyzer is used for cases where the text is not analyzed, but case normalization or sensitivity is required. The StandardAnalyzer is used for analysis that involves stemming, case normalization, case sensitivity, skipping common words like "and" and "the", and localization of the language used to complete the analysis. case_sensitive: as the name implies, whether the indexed column search should be case sensitive; applicable values are true and false. Detailed documentation reference here and detailed blog post on performance.
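To complement the CONTAINS example in the question, here is a hedged sketch of the default PREFIX mode on the same employee table, executed through the Python driver (the connected session and keyspace are assumed). With no OPTIONS, SASI uses PREFIX mode and the non-tokenizing analyzer, which serves 'term%' searches but not '%term%':

    # no OPTIONS -> PREFIX mode, the SASI default
    session.execute("""
        CREATE CUSTOM INDEX IF NOT EXISTS employee_lastname_prefix_idx
        ON employee (lastname)
        USING 'org.apache.cassandra.index.sasi.SASIIndex'
    """)

    # PREFIX mode can only answer prefix searches
    rows = session.execute("SELECT * FROM employee WHERE lastname LIKE 'Smi%'")

    # a '%mi%' or '%th' pattern would need the CONTAINS mode shown in the question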
Cassandra
48,734,670
10
I'm trying to find a way to log all queries run against Cassandra from Python code, specifically logging them as they are executed using a BatchStatement. Are there any hooks or callbacks I can use to log this?
2 options: Stick to session.add_request_init_listener From the source code: a) BoundStatement https://github.com/datastax/python-driver/blob/3.11.0/cassandra/query.py#L560 The passed values are stored in raw_values, you can try to extract it b) BatchStatement https://github.com/datastax/python-driver/blob/3.11.0/cassandra/query.py#L676 It stores all the statements and parameters used to construct this object in _statements_and_parameters. Seems it can be fetched although it’s not a public property c) Only this hook is called, I didn’t manage to find any other hooks https://github.com/datastax/python-driver/blob/master/cassandra/cluster.py#L2097 But it has nothing to do with queries actual execution - it's just a way to inspect what kind of queries has been constructed and maybe add additional callbacks/errbacks Approach it from a different angle and use traces https://datastax.github.io/python-driver/faq.html#how-do-i-trace-a-request https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.ResponseFuture.get_all_query_traces Request tracing can be turned on for any request by setting trace=True in Session.execute_async(). View the results by waiting on the future, then ResponseFuture.get_query_trace() Here's an example of BatchStatement tracing using option 2: bs = BatchStatement() bs.add_all(['insert into test.test(test_type, test_desc) values (%s, %s)', 'insert into test.test(test_type, test_desc) values (%s, %s)', 'delete from test.test where test_type=%s', 'update test.test set test_desc=%s where test_type=%s'], [['hello1', 'hello1'], ['hello2', 'hello2'], ['hello2'], ['hello100', 'hello1']]) res = session.execute(bs, trace=True) trace = res.get_query_trace() for event in trace.events: if event.description.startswith('Parsing'): print event.description It produces the following output: Parsing insert into test.test(test_type, test_desc) values ('hello1', 'hello1') Parsing insert into test.test(test_type, test_desc) values ('hello2', 'hello2') Parsing delete from test.test where test_type='hello2' Parsing update test.test set test_desc='hello100' where test_type='hello1'
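For completeness, a minimal sketch of option 1, assuming session is an already connected cassandra.cluster.Session: the listener receives the ResponseFuture before the request is sent, and its public query attribute is the Statement or BatchStatement being executed (a batch's individual statements only live in the non-public _statements_and_parameters attribute mentioned above):

    def log_request(response_future):
        # response_future.query is the Statement (or BatchStatement) about to be sent
        print('about to execute:', response_future.query)

    # invoked for every request created on this session, before it goes on the wire
    session.add_request_init_listener(log_request)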
Cassandra
46,773,522
10
I am using the following filters in Postman to make a POST request in a Web API but I am unable to make a simple POST request in Python with the requests library. First, I am sending a POST request to this URL (http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets) with the following filters in Postman applied to the Body, with the raw and JSON(application/json) options selected. Filters in Postman { "filter": { "filters": [ { "field": "RCA_Assigned_Date", "operator": "gte", "value": "2017-05-31 00:00:00" }, { "field": "RCA_Assigned_Date", "operator": "lte", "value": "2017-06-04 00:00:00" }, { "field": "T_Subcategory", "operator": "neq", "value": "Temporary Degradation" }, { "field": "Issue_Status", "operator": "neq", "value": "Queued" }], "logic": "and" } } The database where the data is stored is Cassandra and according to the following links Cassandra not equal operator, Cassandra OR operator, Cassandra Between order by operators, Cassandra does not support the NOT EQUAL TO, OR, BETWEEN operators, so there is no way I can filter the URL with these operators except with AND. Second, I am using the following code to apply a simple filter with the requests library. import requests payload = {'field':'T_Subcategory','operator':'neq','value':'Temporary Degradation'} url = requests.post("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets",data=payload) But what I've got is the complete data of tickets instead of only those that are not temporary degradation. Third, the system is actually working but we are experiencing a delay of 2-3 mins to see the data. The logic goes as follows: We have 8 users and we want to see all the tickets per user that are not temporary degradation, then we do: def get_json(): if user_name == "user 001": with urllib.request.urlopen( "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&001",timeout=15) as url: complete_data = json.loads(url.read().decode()) elif user_name == "user 002": with urllib.request.urlopen( "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&002",timeout=15) as url: complete_data = json.loads(url.read().decode()) return complete_data def get_tickets_not_temp_degradation(start_date,end_date,complete_): return Counter([k['user_name'] for k in complete_data if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date and k['T_subcategory'] != 'Temporary Degradation']) Basically, we get the whole set of tickets from the current and last year, then we let Python to filter the complete set by user and so far there are only 10 users which means that this process is repeated 10 times and makes me no surprise to discover why we get the delay... My questions is how can I fix this problem of the requests library? I am using the following link Requests library documentation as a tutorial to make it working but it just seems that my payload is not being read.
Your Postman request is a JSON body. Just reproduce that same body in Python. Your Python code is not sending JSON, nor is it sending the same data as your Postman sample. For starters, sending a dictionary via the data arguments encodes that dictionary to application/x-www-form-urlencoded form, not JSON. Secondly, you appear to be sending a single filter. The following code replicates your Postman post exactly: import requests filters = {"filter": { "filters": [{ "field": "RCA_Assigned_Date", "operator": "gte", "value": "2017-05-31 00:00:00" }, { "field": "RCA_Assigned_Date", "operator": "lte", "value": "2017-06-04 00:00:00" }, { "field": "T_Subcategory", "operator": "neq", "value": "Temporary Degradation" }, { "field": "Issue_Status", "operator": "neq", "value": "Queued" }], "logic": "and" }} url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets" response = requests.post(url, json=filters) Note that filters is a Python data structure here, and that it is passed to the json keyword argument. Using the latter does two things: Encode the Python data structure to JSON (producing the exact same JSON value as your raw Postman body value). Set the Content-Type header to application/json (as you did in your Postman configuration by picking the JSON option in the dropdown menu after picking raw for the body). requests is otherwise just an HTTP API, it can't make Cassandra do any more than any other HTTP library. The urllib.request.urlopen code sends GET requests, and are trivially translated to requests with: def get_json(): url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets" response = requests.get(url, params={'user_name': user}, timeout=15) return response.json() I removed the if branching and replaced that with using the params argument, which translates a dictionary of key-value pairs to a correctly encoded URL query (passing in the user name as the user_name key). Note the json() call on the response; this takes care of decoding JSON data coming back from the server. This still takes long, you are not filtering the Cassandra data much here.
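If you want to verify exactly what requests will put on the wire before comparing it with Postman, you can build and inspect the prepared request (a small sketch reusing the url and filters variables from above):

    req = requests.Request('POST', url, json=filters).prepare()
    print(req.headers['Content-Type'])   # application/json
    print(req.body)                      # the serialized JSON filter document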
Cassandra
45,201,628
10
I understand that text and varchar are aliases, which store UTF-8 strings. What about ASCII, which in the documentation says "US-ASCII character string"? What's the difference besides encoding? Is there any size difference? Is there a preferred choice between these two when I'm storing large strings (~500KB)?
Regarding this answer: if the data is a piece of text, for example a String in Java, it is encoded in UTF-16 at runtime, but when serialized in Cassandra with the text type, UTF-8 is used. UTF-16 always uses 2 bytes per character (sometimes 4), while UTF-8 is space efficient and, depending on the character, can be 1, 2, 3 or 4 bytes long. That means there is CPU work to serialize such data for encoding/decoding purposes. Also, depending on the text, for example the string 158786464563, the data will be stored with 12 bytes; that means more space is used and more IO as well. Note that Cassandra offers the ascii type, which follows the US-ASCII character set and always uses 1 byte per character. Is there any size difference? Yes. Is there a preferred choice between these two when I'm storing large strings (~500KB)? Yes, because ascii is more space efficient than UTF-8 and UTF-8 is more space efficient than UTF-16. Again, all of this depends on how you are serializing/encoding/decoding the data. For more, check out "what-is-the-advantage-of-choosing-ascii-encoding-over-utf-8"
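A small sketch of the practical difference, run through the Python driver (the connected session, the test keyspace and the txt_demo table are assumptions): both types accept plain US-ASCII, but an ascii column rejects anything outside that character set, while text stores it as UTF-8:

    session.execute("CREATE TABLE IF NOT EXISTS txt_demo (id int PRIMARY KEY, a ascii, t text)")

    # plain US-ASCII fits both column types
    session.execute("INSERT INTO txt_demo (id, a, t) VALUES (1, 'hello', 'hello')")

    # a non-ASCII string is fine for the text column ...
    session.execute("INSERT INTO txt_demo (id, t) VALUES (2, 'héllo')")

    # ... but should be rejected for the ascii column
    session.execute("INSERT INTO txt_demo (id, a) VALUES (3, 'héllo')")   # expected to fail validation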
Cassandra
45,017,699
10
Is there an "IF EXISTS UPDATE ELSE INSERT" command in CQL (Cassandra)? If not, what's the most efficient way to perform such a query?
Basically you should use UPDATE. In Cassandra, updates have slightly different mechanics compared to the relational world and work as implied inserts (upserts). See this answer: Does an UPDATE become an implied INSERT
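A short sketch of what that means in practice, using the Python driver (the connected session and the upsert_demo table are assumptions): a plain UPDATE creates the row if it doesn't exist, and lightweight transactions are only needed when you truly want conditional behaviour:

    session.execute("CREATE TABLE IF NOT EXISTS upsert_demo (id int PRIMARY KEY, val int)")

    # no row with id=1 exists yet -- this UPDATE still creates it (upsert semantics)
    session.execute("UPDATE upsert_demo SET val = 10 WHERE id = 1")

    # conditional variants exist as lightweight transactions, but they add a Paxos
    # round trip, so only use them when you really need the condition
    session.execute("INSERT INTO upsert_demo (id, val) VALUES (2, 20) IF NOT EXISTS")
    session.execute("UPDATE upsert_demo SET val = 30 WHERE id = 2 IF EXISTS")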
Cassandra
43,395,738
10
I use the Cassandra Java driver. I receive 150k requests per second, which I insert into 8 tables having different partition keys. My question is which is the better way: batch inserting into these tables, or inserting one by one. I am asking this question because, considering my request rate (150k), batching sounds like the better option, but because all the tables have different partition keys, a batch appears expensive.
Please check my answer at the link below: Cassandra batch query performance on tables having different partition keys. Batches are not for improving performance; they are used for ensuring atomicity and isolation. Batching can be effective for single-partition write operations, but batches are often mistakenly used in an attempt to optimize performance, and depending on the batch operation the performance may actually worsen. https://docs.datastax.com/en/cql/3.3/cql/cql_using/useBatch.html If data consistency is not needed among those tables, then use single inserts. Single requests are distributed and propagated properly among the nodes (depending on the load balancing policy). If you use batches just for request handling, they will put a lot of extra work on the coordinator nodes, which will not be efficient.
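A hedged sketch of the two approaches, shown with the Python driver for brevity (the Java driver has the equivalent executeAsync and BatchStatement APIs); table_a, table_b and the connected session are hypothetical stand-ins for your 8 tables:

    from cassandra.query import BatchStatement

    insert_a = session.prepare("INSERT INTO table_a (pk, v) VALUES (?, ?)")
    insert_b = session.prepare("INSERT INTO table_b (pk, v) VALUES (?, ?)")

    # preferred for different partition keys: send the writes individually and asynchronously
    futures = [session.execute_async(insert_a, (1, 'x')),
               session.execute_async(insert_b, (2, 'y'))]
    for f in futures:
        f.result()                      # wait for completion / surface errors

    # a logged batch is only worth it when the writes must be atomic; it adds
    # batch-log and coordinator overhead, so it is not a throughput optimization
    batch = BatchStatement()
    batch.add(insert_a, (1, 'x'))
    batch.add(insert_b, (2, 'y'))
    session.execute(batch)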
Cassandra
42,930,498
10
In "Cassandra The Definitive Guide" (2nd edition) by Jeff Carpenter & Eben Hewitt, the following formula is used to calculate the size of a table on disk (apologies for the blurred part): ck: primary key columns cs: static columns cr: regular columns cc: clustering columns Nr: number of rows Nv: it's used for counting the total size of the timestamps (I don't get this part completely, but for now I'll ignore it). There are two things I don't understand in this equation. First: why do clustering columns size gets counted for every regular column? Shouldn't we multiply it by the number of rows? It seems to me that by calculating this way, we're saying that the data in each clustering column, gets replicated for each regular column, which I suppose is not the case. Second: why do primary key columns don't get multiplied by the number of partitions? From my understanding, if we have a node with two partitions, then we should multiply the size of the primary key columns by two because we'll have two different primary keys in that node.
It's because of Cassandra's internal structure in versions < 3: there is only one entry for each distinct partition key value; for each distinct partition key value there is only one entry per static column; there is an empty marker entry for each clustering key value; and for each regular column in a row there is a single entry that also carries the clustering key columns. Let's take an example: CREATE TABLE my_table ( pk1 int, pk2 int, ck1 int, ck2 int, d1 int, d2 int, s int static, PRIMARY KEY ((pk1, pk2), ck1, ck2) ); Insert some dummy data: pk1 | pk2 | ck1 | ck2 | s | d1 | d2 -----+-----+-----+------+-------+--------+--------- 1 | 10 | 100 | 1000 | 10000 | 100000 | 1000000 1 | 10 | 100 | 1001 | 10000 | 100001 | 1000001 2 | 20 | 200 | 2000 | 20000 | 200000 | 2000001 The internal structure will be: |100:1000: |100:1000:d1|100:1000:d2|100:1001: |100:1001:d1|100:1001:d2| -----+-------+-----------+-----------+-----------+-----------+-----------+-----------+ 1:10 | 10000 | | 100000 | 1000000 | | 100001 | 1000001 | |200:2000: |200:2000:d1|200:2000:d2| -----+-------+-----------+-----------+-----------+ 2:20 | 20000 | | 200000 | 2000000 | So the size of the table will be: Single Partition Size = (4 + 4 + 4 + 4) + 4 + 2 * ((4 + (4 + 4)) + (4 + (4 + 4))) bytes = 68 bytes Estimated Table Size = Single Partition Size * Number Of Partitions = 68 * 2 bytes = 136 bytes Here all of the fields are of type int (4 bytes), and there are 4 primary key columns, 1 static column, 2 clustering key columns and 2 regular columns. More: http://opensourceconnections.com/blog/2013/07/24/understanding-how-cql3-maps-to-cassandras-internal-data-structure/
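To make the arithmetic explicit, here is a tiny Python sketch that reproduces the 68-byte estimate above, using only the assumptions of this answer (4 bytes per int, no timestamps or per-cell overhead):

    INT = 4                       # every column in the example is an int

    partition_keys    = 2 * INT   # pk1, pk2
    clustering_keys   = 2 * INT   # ck1, ck2
    static_columns    = 1 * INT   # s
    regular_columns   = 2         # d1, d2
    rows_in_partition = 2

    # each regular-column cell also carries the clustering key values with it
    single_partition = (partition_keys + clustering_keys) + static_columns \
        + rows_in_partition * regular_columns * (INT + clustering_keys)

    print(single_partition)        # 68
    print(single_partition * 2)    # 136 -- the estimate for two partitions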
Cassandra
42,736,040
10
I was trying out a simple connection to my Cassandra instance through Java. I made a 'demo' keyspace to cqlsh and created a table in the java program. The code is below: Jars Used: slf4j.api-1.6.1 cassandra-all-2.1.2 public class CassandraConnection { public static void main(String[] args){ String ipAddress="127.0.0.1"; String keySpace="demo"; Cluster cluster; Session session; cluster=Cluster.builder().addContactPoint(ipAddress).build(); session=cluster.connect(keySpace); System.out.println("====================Before insert"); String cqlInsertStmt="insert into users (lastname,age,city,email,firstname) values" +"('Gopalan',32,'Paramakkudi','[email protected]','Murugan') "; session.execute(cqlInsertStmt); String cqlSelectStmt="select * from users"; ResultSet resultSet=session.execute(cqlSelectStmt); System.out.println("=================After insert"); for(Row row: resultSet){ System.out.format("%s %s %d %s %s \n", row.getString("firstname"),row.getString("lastname"),row.getInt("age"),row.getString("city"),row.getString("email")); } System.out.println("=================After update"); } } I am getting the following error: Failed to instantiate SLF4J LoggerFactory Reported exception: java.lang.NoClassDefFoundError: ch/qos/logback/core/joran/spi/JoranException at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383) at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:60) at CassandraConnection.main(CassandraConnection.java:21) Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.joran.spi.JoranException at java.net.URLClassLoader$1.run(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 7 more Exception in thread "main" java.lang.NoClassDefFoundError: ch/qos/logback/core/joran/spi/JoranException at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383) at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:60) at CassandraConnection.main(CassandraConnection.java:21) Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.joran.spi.JoranException at java.net.URLClassLoader$1.run(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 7 more
You have to make sure that the logback JARs are on your classpath; the missing class ch.qos.logback.core.joran.spi.JoranException lives in logback-core, which is pulled in when you add logback-classic as the SLF4J binding. See the logback documentation for starters, and the Cassandra docs for what they have to say about logback. Beyond that, the real take-away here is that the runtime is telling you it can't find a certain class, and it gives you the full name of that class; you take that input and turn to your favorite search engine to figure out what is going on.
Cassandra
42,205,247
10
The exact Exception is as follows com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal] These are the versions of Software I am using Spark 1.5 Datastax-cassandra 3.2.1 CDH 5.5.1 The code I am trying to execute is a Spark program using the java api and it basically reads data (csv's) from hdfs and loads it into cassandra tables . I am using the spark-cassandra-connector. I had a lot of issues regarding the google s guava library conflict initially which I was able to resolve by shading the guava library and building a snap-shot jar with all the dependencies. However I was able to load data for some files but for some files I get the Codec Exception . When I researched on this issue I got these following threads on the same issue. https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/yZyaOQ-wazk https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/yZyaOQ-wazk After going through these discussion what I understand is either it is a wrong version of the cassandra-driver I am using . Or there is still a class path issue related to the guava library as cassandra 3.0 and later versions use guava 16.0.1 and the discussions above say that there might be a lower version of the guava present in the class path . Here is pom.xml file <dependencies> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_2.10</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>com.datastax.spark</groupId> <artifactId>spark-cassandra-connector-java_2.10</artifactId> <version>1.5.0-M3</version> </dependency> <dependency> <groupId>org.apache.cassandra</groupId> <artifactId>cassandra-clientutil</artifactId> <version>3.2.1</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>2.3</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <filters> <filter> <artifact>*:*</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> <relocations> <relocation> <pattern>com.google</pattern> <shadedPattern>com.pointcross.shaded.google</shadedPattern> </relocation> </relocations> <minimizeJar>false</minimizeJar> <shadedArtifactAttached>true</shadedArtifactAttached> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> and these are the dependencies that were downloaded using the above pom spark-core_2.10-1.5.0.jar spark-cassandra-connector- java_2.10-1.5.0-M3.jar spark-cassandra-connector_2.10-1.5.0-M3.jar spark-repl_2.10-1.5.1.jar spark-bagel_2.10-1.5.1.jar spark-mllib_2.10-1.5.1.jar spark-streaming_2.10-1.5.1.jar spark-graphx_2.10-1.5.1.jar guava-16.0.1.jar cassandra-clientutil-3.2.1.jar cassandra-driver-core-3.0.0-alpha4.jar Above are some of the main dependencies on in my snap-shot jar. Y is the CodecNotFoundException ? Is it because of the class path (guava) ? or cassandra-driver (cassandra-driver-core-3.0.0-alpha4.jar for datastax cassandra 3.2.1) or because of the code . Another point is all the dates I am inserting to columns who's data type is timestamp . 
Also when I do a spark-submit I see the class path in the logs , There are other guava versions which are under the hadoop libs . R these causing the problem ? How do we specify the a user-specific class path while do a spark-submit. Will that help ? Would be glad to get some points on these. Thanks Following is the stacktrace com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [timestamp <-> java.lang.String] at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:689) at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:550) at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:530) at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485) at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:85) at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:198) at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:126) at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:223) at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:1) at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027) at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555) at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121) at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) I also got com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [Math.BigDecimal <-> java.lang.String]
When you call bind(params...) on a PreparedStatement the driver expects you to provide values w/ java types that map to the cql types. This error ([timestamp <-> java.lang.String]) is telling you that there is no such Codec registered that maps the java String to a cql timestamp. In the java driver, the timestamp type maps to java.util.Date. So you have 2 options here: Where the column being bound is for a timestamp, provide a Date-typed value instead of a String. Create a codec that maps timestamp <-> String. To do so you could create sub class of MappingCodec as described on the documentation site, that maps String to timestamp: public class TimestampAsStringCodec extends MappingCodec<String, Date> { public TimestampAsStringCodec() { super(TypeCodec.timestamp(), String.class); } @Override protected Date serialize(String value) { ... } @Override protected String deserialize(Date value) { ... } } You then would need to register the Codec: cluster.getConfiguration().getCodecRegistry() .register(new TimestampAsStringCodec());
Cassandra
37,588,733
10
I'm testing Cassandra as a time series database. I created the data model below: CREATE KEYSPACE sm WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 1 }; USE sm; CREATE TABLE newdata (timestamp timestamp, deviceid int, tagid int, decvalue decimal, alphavalue text, PRIMARY KEY (deviceid,tagid,timestamp)); In the primary key, I set deviceid as the partition key, which means all data with the same device id will be written to one node (does it mean one machine or one partition? Each partition can have max 2 billion rows). Also, if I query data within the same node, the retrieval will be fast, am I correct? I’m new to Cassandra and a bit confused about the partition key and clustering key. Most of my queries will be as below: select latest timestamp of known deviceid and tagid; select decvalue of known deviceid and tagid and timestamp; select alphavalue of known deviceid and tagid and timestamp; select * of known deviceid and tagid with time range; select * of known deviceid with time range. I will have around 2000 deviceids, and each deviceid will have 60 tagid/value pairs. I'm not sure if it will be a wide row of deviceid, timestamp, tagid/value, tagid/value....
"I’m new to Cassandra and a bit confused about the partition key and clustering key." It sounds like you understand partition keys, so I'll just add that your partition key helps Cassandra figure out where (which token range) in the cluster to store your data. Each node is responsible for several primary token ranges (assuming vnodes). When your data is written to a data partition, it is sorted by your clustering keys. This is also how it is stored on disk, so remember that your clustering keys determine the order in which your data is stored on disk. "Each partition can have max 2 billion rows." That's not exactly true. Each partition can support up to 2 billion cells. A cell is essentially a column name/value pair. And your clustering keys add up to a single cell by themselves. So compute your cells by counting the column values that you store for each CQL row, and add one more if you use clustering columns. Depending on your wide row structure you will probably have a limitation of far fewer than 2 billion rows. Additionally, that's just the storage limitation. Even if you managed to store 1 million CQL rows in a single partition, querying that partition would return so much data that it would be ungainly and would probably time out. "If I query data within the same node, the retrieval will be fast, am I correct?" It'll at least be faster than multi-key queries that hit multiple nodes. But whether or not it will be "fast" depends on other things, like how wide your rows are, and how often you do things like deletes and in-place updates. "Most of my queries will be as below: select latest timestamp of known deviceid and tagid; select decvalue of known deviceid and tagid and timestamp; select alphavalue of known deviceid and tagid and timestamp; select * of known deviceid and tagid with time range; select * of known deviceid with time range." Your current data model can support all of those queries, except for the last one. In order to perform a range query on timestamp, you'll need to duplicate your data into a new table and build a PRIMARY KEY to support that query pattern. This is called "query-based modeling." I would build a query table like this: CREATE TABLE newdata_by_deviceid_and_time ( timestamp timestamp, deviceid int, tagid int, decvalue decimal, alphavalue text, PRIMARY KEY (deviceid,timestamp)); That table can support a range query on timestamp, while partitioning on deviceid. But the biggest problem I see with either of these models is that of "unbounded row growth." Basically, as you collect more and more values for your devices, you will approach the 2 billion cell limit per partition (and again, things will probably get slow way before that). What you need to do is use a modeling technique called "time bucketing." For this example, I'll say that I determined that bucketing by month would keep me well under the 2 billion cell limit and allow for the type of date range flexibility that I needed.
Given that, I would add an additional partition key monthbucket, and my (new) table would look like this: CREATE TABLE newdata_by_deviceid_and_time ( timestamp timestamp, deviceid int, tagid int, decvalue decimal, alphavalue text, monthbucket text, PRIMARY KEY ((deviceid,monthbucket),timestamp)); Now when I want to query for data for a specific device and date range, I also specify the monthbucket: SELECT * FROM newdata_by_deviceid_and_time WHERE deviceid=23 AND monthbucket='201603' AND timestamp >= '2016-03-01 00:00:00-0500' AND timestamp < '2016-03-16 00:00:00-0500'; Remember, monthbucket is just an example. For you, it may make more sense to use quarter or even year (assuming that you don't store too many values per deviceid in a year).
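A small sketch of how writes to that bucketed table could look with the Python driver, deriving monthbucket from the timestamp (the connected session, the sample values and the '%Y%m' bucket format are assumptions):

    from datetime import datetime
    from decimal import Decimal

    insert = session.prepare("""
        INSERT INTO newdata_by_deviceid_and_time
            (deviceid, monthbucket, timestamp, tagid, decvalue, alphavalue)
        VALUES (?, ?, ?, ?, ?, ?)
    """)

    ts = datetime(2016, 3, 15, 12, 0, 0)
    monthbucket = ts.strftime('%Y%m')        # -> '201603', matching the query above
    session.execute(insert, (23, monthbucket, ts, 7, Decimal('42.5'), 'ok'))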
Cassandra
36,048,660
10
As stated in title I'm wondering if is it necessary to spark-submit *.jar? I'm using Datastax Enterprise Cassandra for a while, but now I need to use Spark too. I watched almost all videos from DS320: DataStax Enterprise Analytics with Apache Spark and there is nothing about connecting to spark remotely from java application. Now I have 3 running nodes of DSE. I can connect to Spark from spark shell. But after 2 days of trying to connect to Spark from java code I'm giving up. This is my Java code SparkConf sparkConf = new SparkConf(); sparkConf.setAppName("AppName"); //sparkConf.set("spark.shuffle.blockTransferService", "nio"); //sparkConf.set("spark.driver.host", "*.*.*.*"); //sparkConf.set("spark.driver.port", "7007"); sparkConf.setMaster("spark://*.*.*.*:7077"); JavaSparkContext sc = new JavaSparkContext(sparkConf); Result of connecting 16/01/18 14:32:43 ERROR TransportResponseHandler: Still have 2 requests outstanding when connection from *.*.*.*/*.*.*.*:7077 is closed 16/01/18 14:32:43 WARN AppClient$ClientEndpoint: Failed to connect to master *.*.*.*:7077 java.io.IOException: Connection from *.*.*.*/*.*.*.*:7077 closed at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124) at org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739) at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/01/18 14:33:03 ERROR TransportResponseHandler: Still have 2 requests outstanding when connection from *.*.*.*/*.*.*.*:7077 is closed 16/01/18 14:33:03 WARN AppClient$ClientEndpoint: Failed to connect to master *.*.*.*:7077 java.io.IOException: Connection from *.*.*.*/*.*.*.*:7077 closed at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124) at 
org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158) at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144) at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739) at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/01/18 14:33:23 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up. 16/01/18 14:33:23 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet. 
16/01/18 14:33:23 WARN AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master 16/01/18 14:33:23 ERROR MapOutputTrackerMaster: Error communicating with MapOutputTracker java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326) at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208) at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190) at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) at scala.concurrent.Await$.result(package.scala:190) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110) at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120) at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462) at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93) at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229) at org.apache.spark.SparkContext.stop(SparkContext.scala:1755) at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127) at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264) at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134) at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163) at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 16/01/18 14:33:23 ERROR Utils: Uncaught exception in thread appclient-registration-retry-thread org.apache.spark.SparkException: Error communicating with MapOutputTracker at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:114) at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120) at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462) at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93) at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229) at org.apache.spark.SparkContext.stop(SparkContext.scala:1755) at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127) at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264) at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134) at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163) at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326) at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208) at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190) at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) at scala.concurrent.Await$.result(package.scala:190) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110) ... 18 more 16/01/18 14:33:23 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main] org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up. at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438) at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124) at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264) at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134) at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163) at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) I tried to change SPARK_MASTER_IP, SPARK_LOCAL_IP and many others config variables, but without success. Now I found some articles about submiting jars to Spark and I'm not sure (can't find any proof) if it is the cause? Are spark-submit and interactive shell the only ways to use spark? Any articles about it? I would be grateful if you could give me a tip.
I would greatly recommend using dse spark-submit with DSE. While it is not required, it is definitely much easier than ensuring yourself that the security and classpath options set up for DSE will work with your cluster. It also provides a much simpler approach (in my opinion) for configuring your SparkConf and placing jars on the executor classpaths. Within DSE it will also automatically route your application to the correct Spark master URL, further simplifying setup. If you really want to manually construct your SparkConf, be sure to set your Spark master to the output of dsetool spark-master or its equivalent in your version of DSE.
Cassandra
34,876,451
10
I'm trying to use the spark-cassandra-connector via spark-shell on dataproc, however I am unable to connect to my cluster. It appears that there is a version mismatch since the classpath is including a much older guava version from somewhere else, even when I specify the proper version on startup. I suspect this is likely caused by all the Hadoop dependencies put into the classpath by default. Is there anyway to have spark-shell use only the proper version of guava, without getting rid of all the Hadoop-related dataproc included jars? Relevant Data: Starting spark-shell, showing it having the proper version of Guava: $ spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.5.0-M3 :: loading settings :: url = jar:file:/usr/lib/spark/lib/spark-assembly-1.5.2-hadoop2.7.1.jar!/org/apache/ivy/core/settings/ivysettings.xml com.datastax.spark#spark-cassandra-connector_2.10 added as a dependency :: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0 confs: [default] found com.datastax.spark#spark-cassandra-connector_2.10;1.5.0-M3 in central found org.apache.cassandra#cassandra-clientutil;2.2.2 in central found com.datastax.cassandra#cassandra-driver-core;3.0.0-alpha4 in central found io.netty#netty-handler;4.0.27.Final in central found io.netty#netty-buffer;4.0.27.Final in central found io.netty#netty-common;4.0.27.Final in central found io.netty#netty-transport;4.0.27.Final in central found io.netty#netty-codec;4.0.27.Final in central found com.codahale.metrics#metrics-core;3.0.2 in central found org.slf4j#slf4j-api;1.7.5 in central found org.apache.commons#commons-lang3;3.3.2 in central found com.google.guava#guava;16.0.1 in central found org.joda#joda-convert;1.2 in central found joda-time#joda-time;2.3 in central found com.twitter#jsr166e;1.1.0 in central found org.scala-lang#scala-reflect;2.10.5 in central :: resolution report :: resolve 502ms :: artifacts dl 10ms :: modules in use: com.codahale.metrics#metrics-core;3.0.2 from central in [default] com.datastax.cassandra#cassandra-driver-core;3.0.0-alpha4 from central in [default] com.datastax.spark#spark-cassandra-connector_2.10;1.5.0-M3 from central in [default] com.google.guava#guava;16.0.1 from central in [default] com.twitter#jsr166e;1.1.0 from central in [default] io.netty#netty-buffer;4.0.27.Final from central in [default] io.netty#netty-codec;4.0.27.Final from central in [default] io.netty#netty-common;4.0.27.Final from central in [default] io.netty#netty-handler;4.0.27.Final from central in [default] io.netty#netty-transport;4.0.27.Final from central in [default] joda-time#joda-time;2.3 from central in [default] org.apache.cassandra#cassandra-clientutil;2.2.2 from central in [default] org.apache.commons#commons-lang3;3.3.2 from central in [default] org.joda#joda-convert;1.2 from central in [default] org.scala-lang#scala-reflect;2.10.5 from central in [default] org.slf4j#slf4j-api;1.7.5 from central in [default] --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 16 | 0 | 0 | 0 || 16 | 0 | --------------------------------------------------------------------- :: retrieving :: org.apache.spark#spark-submit-parent confs: [default] 0 artifacts copied, 16 already retrieved (0kB/12ms) Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 1.5.2 /_/ Using Scala version 2.10.4 
(OpenJDK 64-Bit Server VM, Java 1.8.0_66-internal) Type in expressions to have them evaluated. Type :help for more information. 15/12/10 17:38:46 WARN org.apache.spark.metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set. Spark context available as sc. ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be used ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be used 15/12/10 17:38:54 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 15/12/10 17:38:54 WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. SQL context available as sqlContext. Stack Trace when doing initial connection: java.io.IOException: Failed to open native connection to Cassandra at {10.240.0.7}:9042 at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162) at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148) at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148) at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31) at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56) at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81) at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109) at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120) at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:249) at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:51) at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:59) at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:59) at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:146) at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59) at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) at org.apache.spark.rdd.RDD.count(RDD.scala:1125) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47) at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49) at $iwC$$iwC$$iwC$$iwC.<init>(<console>:51) at $iwC$$iwC$$iwC.<init>(<console>:53) at $iwC$$iwC.<init>(<console>:55) at $iwC.<init>(<console>:57) at <init>(<console>:59) at .<init>(<console>:63) at .<clinit>(<console>) at .<init>(<console>:7) at .<clinit>(<console>) at $print(<console>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340) at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$pasteCommand(SparkILoop.scala:825) at org.apache.spark.repl.SparkILoop$$anonfun$standardCommands$8.apply(SparkILoop.scala:345) at org.apache.spark.repl.SparkILoop$$anonfun$standardCommands$8.apply(SparkILoop.scala:345) at scala.tools.nsc.interpreter.LoopCommands$LoopCommand$$anonfun$nullary$1.apply(LoopCommands.scala:65) at scala.tools.nsc.interpreter.LoopCommands$LoopCommand$$anonfun$nullary$1.apply(LoopCommands.scala:65) at scala.tools.nsc.interpreter.LoopCommands$NullaryCmd.apply(LoopCommands.scala:76) at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:809) at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657) at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) at org.apache.spark.repl.Main$.main(Main.scala:31) at org.apache.spark.repl.Main.main(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/Listenab leFuture; at com.datastax.driver.core.Connection.initAsync(Connection.java:178) at com.datastax.driver.core.Connection$Factory.open(Connection.java:742) at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:240) at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:187) at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79) at 
com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1393) at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:402) at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155) ... 70 more
Unfortunately, Hadoop's dependency on Guava 11 (which doesn't have the Futures.withFallback method mentioned) is a longstanding issue and indeed Hadoop 2.7.1 still depends on Guava 11. Spark core uses Guava 14, as can be seen here but this is worked around by shading Guava inside the Spark assembly: $ jar tf /usr/lib/spark/lib/spark-assembly.jar | grep concurrent.Futures org/spark-project/guava/util/concurrent/Futures$1.class org/spark-project/guava/util/concurrent/Futures$2.class org/spark-project/guava/util/concurrent/Futures$3.class org/spark-project/guava/util/concurrent/Futures$4.class org/spark-project/guava/util/concurrent/Futures$5.class org/spark-project/guava/util/concurrent/Futures$6.class org/spark-project/guava/util/concurrent/Futures$ChainingListenableFuture$1.class org/spark-project/guava/util/concurrent/Futures$ChainingListenableFuture.class org/spark-project/guava/util/concurrent/Futures$CombinedFuture$1.class org/spark-project/guava/util/concurrent/Futures$CombinedFuture$2.class org/spark-project/guava/util/concurrent/Futures$CombinedFuture.class org/spark-project/guava/util/concurrent/Futures$FallbackFuture$1$1.class org/spark-project/guava/util/concurrent/Futures$FallbackFuture$1.class org/spark-project/guava/util/concurrent/Futures$FallbackFuture.class org/spark-project/guava/util/concurrent/Futures$FutureCombiner.class org/spark-project/guava/util/concurrent/Futures$ImmediateCancelledFuture.class org/spark-project/guava/util/concurrent/Futures$ImmediateFailedCheckedFuture.class org/spark-project/guava/util/concurrent/Futures$ImmediateFailedFuture.class org/spark-project/guava/util/concurrent/Futures$ImmediateFuture.class org/spark-project/guava/util/concurrent/Futures$ImmediateSuccessfulCheckedFuture.class org/spark-project/guava/util/concurrent/Futures$ImmediateSuccessfulFuture.class org/spark-project/guava/util/concurrent/Futures$MappingCheckedFuture.class org/spark-project/guava/util/concurrent/Futures.class $ javap -cp /usr/lib/spark/lib/spark-assembly.jar org.spark-project.guava.util.concurrent.Futures Compiled from "Futures.java" public final class org.spark-project.guava.util.concurrent.Futures { public static <V, X extends java.lang.Exception> org.spark-project.guava.util.concurrent.CheckedFuture<V, X> makeChecked(org.spark-project.guava.util.concurrent.ListenableFuture<V>, com.google.common.base.Function<java.lang.Exception, X>); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> immediateFuture(V); public static <V, X extends java.lang.Exception> org.spark-project.guava.util.concurrent.CheckedFuture<V, X> immediateCheckedFuture(V); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> immediateFailedFuture(java.lang.Throwable); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> immediateCancelledFuture(); public static <V, X extends java.lang.Exception> org.spark-project.guava.util.concurrent.CheckedFuture<V, X> immediateFailedCheckedFuture(X); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> withFallback(org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>, org.spark-project.guava.util.concurrent.FutureFallback<? extends V>); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> withFallback(org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>, org.spark-project.guava.util.concurrent.FutureFallback<? 
extends V>, java.util.concurrent.Executor); public static <I, O> org.spark-project.guava.util.concurrent.ListenableFuture<O> transform(org.spark-project.guava.util.concurrent.ListenableFuture<I>, org.spark-project.guava.util.concurrent.AsyncFunction<? super I, ? extends O>); public static <I, O> org.spark-project.guava.util.concurrent.ListenableFuture<O> transform(org.spark-project.guava.util.concurrent.ListenableFuture<I>, org.spark-project.guava.util.concurrent.AsyncFunction<? super I, ? extends O>, java.util.concurrent.Executor); public static <I, O> org.spark-project.guava.util.concurrent.ListenableFuture<O> transform(org.spark-project.guava.util.concurrent.ListenableFuture<I>, com.google.common.base.Function<? super I, ? extends O>); public static <I, O> org.spark-project.guava.util.concurrent.ListenableFuture<O> transform(org.spark-project.guava.util.concurrent.ListenableFuture<I>, com.google.common.base.Function<? super I, ? extends O>, java.util.concurrent.Executor); public static <I, O> java.util.concurrent.Future<O> lazyTransform(java.util.concurrent.Future<I>, com.google.common.base.Function<? super I, ? extends O>); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<V> dereference(org.spark-project.guava.util.concurrent.ListenableFuture<? extends org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>>); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<java.util.List<V>> allAsList(org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>...); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<java.util.List<V>> allAsList(java.lang.Iterable<? extends org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>>); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<java.util.List<V>> successfulAsList(org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>...); public static <V> org.spark-project.guava.util.concurrent.ListenableFuture<java.util.List<V>> successfulAsList(java.lang.Iterable<? extends org.spark-project.guava.util.concurrent.ListenableFuture<? extends V>>); public static <V> void addCallback(org.spark-project.guava.util.concurrent.ListenableFuture<V>, org.spark-project.guava.util.concurrent.FutureCallback<? super V>); public static <V> void addCallback(org.spark-project.guava.util.concurrent.ListenableFuture<V>, org.spark-project.guava.util.concurrent.FutureCallback<? super V>, java.util.concurrent.Executor); public static <V, X extends java.lang.Exception> V get(java.util.concurrent.Future<V>, java.lang.Class<X>) throws X; public static <V, X extends java.lang.Exception> V get(java.util.concurrent.Future<V>, long, java.util.concurrent.TimeUnit, java.lang.Class<X>) throws X; public static <V> V getUnchecked(java.util.concurrent.Future<V>); static {}; } You can follow the instructions here https://arjon.es/2015/making-hadoop-2.6-spark-cassandra-driver-play-nice-together/ to also do shading yourself during compilation. With spark-shell you may be able to get away with some changes in spark.driver.extraClassPath as mentioned here, though collisions may then continue to arise at various points.
Cassandra
34,209,329
10
I have a 4 node cluster with 16 core CPU and 100 GB RAM on each box (2 nodes on each rack). As of now, all are running with default JVM settings of Cassandra (v2.1.4). With this setting, each node uses 13GB RAM and 30% CPU. It is a write heavy cluster with occasional deletes or updates. Do I need to tune the JVM settings of Cassandra to utilize more memory? What all things should I be looking at to make appropriate settings?
Do I need to tune the JVM settings of Cassandra to utilize more memory? The DataStax Tuning Java Resources doc actually has some pretty sound advice on this: Many users new to Cassandra are tempted to turn up Java heap size too high, which consumes the majority of the underlying system's RAM. In most cases, increasing the Java heap size is actually detrimental for these reasons: In most cases, the capability of Java to gracefully handle garbage collection above 8GB quickly diminishes. Modern operating systems maintain the OS page cache for frequently accessed data and are very good at keeping this data in memory, but can be prevented from doing its job by an elevated Java heap size. If you have more than 2GB of system memory, which is typical, keep the size of the Java heap relatively small to allow more memory for the page cache. As you have 100GB of RAM on your machines, (if you are indeed running under the "default JVM settings") your JVM max heap size should be capped at 8192M. And actually, I wouldn't deviate from that that unless you are experiencing issues with garbage collection. JVM resources for Cassandra can be set in the cassandra-env.sh file. If you are curious, look at the code for cassandra-env.sh and look for the calculate_heap_sizes() method. That should give you some insight as to how Cassandra computes your default JVM settings. What all things should I be looking at to make appropriate settings? If you are running OpsCenter (and you should be), add a graph for "Heap Used" and "Non Heap Used." This will allow you to easily monitor JVM heap usage for your cluster. Another thing that helped me, was to write a bash script in which I basically hijacked the JVM calculations from cassandra-env.sh. That way I can run it on a new machine, and know right away what my MAX_HEAP_SIZE and HEAP_NEWSIZE are going to be: #!/bin/bash clear echo "This is how Cassandra will determine its default Heap and GC Generation sizes." system_memory_in_mb=`free -m | awk '/Mem:/ {print $2}'` half_system_memory_in_mb=`expr $system_memory_in_mb / 2` quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2` echo " memory = $system_memory_in_mb" echo " half = $half_system_memory_in_mb" echo " quarter = $quarter_system_memory_in_mb" echo "cpu cores = "`egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo` #cassandra-env logic duped here #this should help you to see how much memory is being allocated #to the JVM if [ "$half_system_memory_in_mb" -gt "1024" ] then half_system_memory_in_mb="1024" fi if [ "$quarter_system_memory_in_mb" -gt "8192" ] then quarter_system_memory_in_mb="8192" fi if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ] then max_heap_size_in_mb="$half_system_memory_in_mb" else max_heap_size_in_mb="$quarter_system_memory_in_mb" fi MAX_HEAP_SIZE="${max_heap_size_in_mb}M" # Young gen: min(max_sensible_per_modern_cpu_core * num_cores, 1/4 * heap size) max_sensible_yg_per_core_in_mb="100" max_sensible_yg_in_mb=`expr ($max_sensible_yg_per_core_in_mb * $system_cpu_cores)` desired_yg_in_mb=`expr $max_heap_size_in_mb / 4` if [ "$desired_yg_in_mb" -gt "$max_sensible_yg_in_mb" ] then HEAP_NEWSIZE="${max_sensible_yg_in_mb}M" else HEAP_NEWSIZE="${desired_yg_in_mb}M" fi echo "Max heap size = " $MAX_HEAP_SIZE echo " New gen size = " $HEAP_NEWSIZE Update 20160212: Also, be sure to check-out Amy Tobey's 2.1 Cassandra Tuning Guide. She has some great tips on how to get your cluster running optimally.
Cassandra
30,207,779
10
Is there a way I can control the maximum size of an SSTable, for example 100 MB, so that when there is more than 100 MB of data for a CF, Cassandra creates the next SSTable?
Unfortunately the answer is not so simple: the sizes of your SSTables are influenced by your compaction strategy, and there is no direct way to control a maximum SSTable size. SSTables are initially created when memtables are flushed to disk as SSTables. The size of these tables initially depends on your memtable settings and the size of your heap (memtable_total_space_in_mb being a large influencer). Typically these initial SSTables are pretty small. SSTables then get merged together as part of a process called compaction. If you use Size-Tiered Compaction Strategy (STCS) you have an opportunity to end up with really large SSTables. STCS combines SSTables in a minor compaction when there are at least min_threshold (default 4) SSTables of the same size, merging them into one file, expiring data and merging keys. This can create very large SSTables after a while. With Leveled Compaction Strategy (LCS) there is an sstable_size_in_mb option that controls a target size for SSTables. In general SSTables will be less than or equal to this size unless you have a partition key with a lot of data ('wide rows'). I haven't experimented much with Date-Tiered Compaction Strategy yet, but it works similarly to STCS in that it merges files of the same size; however, it keeps data together in time order and has a setting to stop compacting old data (max_sstable_age_days), which could be interesting. The key is to find the compaction strategy that works best for your data and then tune its properties for your data model and environment. You can read more about the configuration settings for compaction here, and read this guide to help understand whether STCS or LCS is appropriate for you.
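For example, if you end up on LCS, the target size is just a table-level compaction option; this is only a sketch, and the keyspace, table name and 160 MB value are placeholders rather than anything from the question:
    ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 160};
Keep in mind it is a target, not a hard cap: very wide partitions can still produce larger files.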
Cassandra
29,392,153
10
There doesn't seem to be any direct way to know the number of affected rows in Cassandra for update and delete statements. For example, if I have a query like this: DELETE FROM xyztable WHERE PKEY IN (1,2,3,4,5,6); Now, of course, since I've passed 6 keys, it is obvious that 6 rows will be affected. But, as in the RDBMS world, is there any way to know the affected rows for update/delete statements in the DataStax driver? I've read here that Cassandra gives no feedback on write operations, but I could not find any other discussion on this topic through Google. If that's not possible, can I be sure that with the type of query given above, it will either delete all the rows or fail to delete all of them?
In the eventually consistent world you can look at these operations as saving a delete request and, depending on the requested consistency level, waiting for confirmation from several nodes that the request has been accepted. The request is then delivered to the other nodes asynchronously. Since there is no dependency on anything like foreign keys, nothing should stop data from being deleted if the request was successfully accepted by the cluster. However, there are a lot of ifs. For example, a delete issued at consistency level ONE and successfully accepted by one node, followed by an immediate hard failure of that node, may be lost if it was not replicated before the failure. Another example: during the deletion one node was down, and it stayed down for longer than gc_grace_seconds, i.e., longer than the time required before tombstones (and the deleted data) are removed. If this node is then recovered, all the data that was deleted from the rest of the cluster, but not from this node, will suddenly be brought back to the cluster. So in order to avoid these situations, and to consider operations successful and final, a Cassandra admin needs to implement some measures, including regular repair jobs (to make sure all nodes are up to date). Applications also need to decide what is better: faster performance with consistency level ONE at the expense of possible data loss, or lower performance with higher consistency levels but less possibility of data loss.
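If you want stronger guarantees for a particular delete, you can raise the consistency level for that statement. A minimal cqlsh sketch reusing the table and keys from the question (the CONSISTENCY command is available in reasonably recent cqlsh builds; it is a client-side setting, not CQL proper):
    CONSISTENCY QUORUM;
    DELETE FROM xyztable WHERE pkey IN (1, 2, 3, 4, 5, 6);
You still get no affected-row count back, but a successful response at QUORUM at least tells you that a majority of replicas acknowledged the delete.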
Cassandra
28,611,459
10
My understanding is that a row is overwritten when another row with an identical primary key is inserted. For example: I have columns (user_id int, item_id int, site_id int), and my PRIMARY KEY(user_id, item_id). If I had the following table: user_id, item_id, site_id 2 3 4 and I insert user_id : 2, item_id : 3, site_id : 10, my new table would be: user_id, item_id, site_id 2 3 10 not user_id, item_id, site_id 2 3 4 2 3 10 Does this simple case hold in all cases? Are there any subtleties that I'm likely not aware of? Also, I could not find this in the docs and came to this conclusion by playing around with Cassandra; can anyone provide a doc source?
Yes, this is how Cassandra is designed to operate. In all cases where an UPDATE or INSERT is executed, data will be updated (based on the keys) if it exists, and inserted if it does not. An important point to remember is that under the hood, UPDATE and INSERT are synonymous. If you think about those two as being the same, then you can start to understand why it works the way that it does. That being said, you are correct in that you do have to look closely to find an explicit reference to this behavior in the documentation. I found the closest references in the docs and listed them below: From the UPDATE documentation: The row is created if none existed before, and updated otherwise. Specify the row to update in the WHERE clause by including all columns composing the partition key. ... The UPDATE SET operation is not valid on a primary key field. From the INSERT documentation: You do not have to define all columns, except those that make up the key. ... If the column exists, it is updated. The row is created if none exists. Now while these excerpts may not come right out and say "be careful not to overwrite", I did manage to find an article on Planet Cassandra that was more explicit: How to Do an Upsert in Cassandra Cassandra is a distributed database that avoids reading before a write, so an INSERT or UPDATE sets the column values you specify regardless of whether the row already exists. This means inserts can update existing rows, and updates can create new rows. It also means it’s easy to accidentally overwrite existing data, so keep that in mind.
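If you want to see the overwrite for yourself, a quick cqlsh experiment along the lines of the question's schema works (the table name here is just a placeholder):
    CREATE TABLE user_items (user_id int, item_id int, site_id int, PRIMARY KEY (user_id, item_id));
    INSERT INTO user_items (user_id, item_id, site_id) VALUES (2, 3, 4);
    INSERT INTO user_items (user_id, item_id, site_id) VALUES (2, 3, 10);
    SELECT * FROM user_items;   -- returns a single row: 2 | 3 | 10
Because both inserts share the same primary key (2, 3), the second one simply overwrites site_id.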
Cassandra
28,350,630
10
I'm trying to get the number of key-value pairs in a Cassandra column family. The following is the code I used. PreparedStatement statement = client.session .prepare("select count(*) from corpus.word_usage"); ResultSet results = client.session.execute(statement.bind()); Row row = results.one(); System.out.println(row.getVarint(0)); But when I run this code, I get the following exception. Exception in thread "main" com.datastax.driver.core.exceptions.InvalidTypeException: Column count is of type bigint at com.datastax.driver.core.ColumnDefinitions.checkType(ColumnDefinitions.java:291) at com.datastax.driver.core.ArrayBackedRow.getVarint(ArrayBackedRow.java:185) at SimpleClient.main(SimpleClient.java:57) According to the DataStax documentation (http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Row.html) getVarint should return a BigInteger. So why am I getting an exception here? What am I doing wrong?
You can get value as a long, instead. I couldn't test it but could you try this: PreparedStatement statement = client.session.prepare("select count(*) from corpus.word_usage"); ResultSet results = client.session.execute(statement.bind()); Row row = results.one(); long expected = row.getLong("count");
Cassandra
27,421,535
10
I was wondering whether it is a good idea (performance-wise) to have multiple counters on the same table/column family in Cassandra. My current setup is this: CREATE TABLE IF NOT EXISTS contentCounters ( downvotes counter, comments counter, upvotes counter, contentid uuid, PRIMARY KEY (contentid) ); But I'm not sure whether or not it's OK, in terms of performance, to have multiple counters on the same table. Previously, I had 3 tables with counters (tracking upvote, downvote and comment counts), but I would like to combine them all into the table above so that a single fast query can get all of these values (as that is my use case). Any advice would be greatly appreciated. Many thanks,
I don't think this should be an issue. Cassandra doesn't update "rows" in their entirety. The "row" itself isn't locked during updates; based on this article, in 2.1+ only the counter columns named in the UPDATE statement (identified by the specified partition key) are touched. There is also a better counter implementation in 2.1+ than in previous versions. Read more here
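A side benefit of keeping the counters in one table is that several of them can be bumped in a single statement. A small sketch against the question's table (the UUID is a placeholder):
    UPDATE contentcounters
       SET upvotes = upvotes + 1,
           comments = comments + 1
     WHERE contentid = 756716f7-2e54-4715-9f00-91debea6cf50;
Note that a counter table may only contain counter columns outside the primary key, which the schema above already satisfies.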
Cassandra
25,146,373
10
I am wondering what the advantage of using BoundStatement over PreparedStatement is. PreparedStatement statement = session.prepare( "INSERT INTO simplex.songs " + "(id, title, album, artist) " + "VALUES (?, ?, ?, ?);"); BoundStatement boundStatement = new BoundStatement(statement); session.execute(boundStatement.bind( UUID.fromString("756716f7-2e54-4715-9f00-91debea6cf50"), "La Petite Tonkinoise", "Bye Bye Blackbird", "Joséphine Baker")); The simplest way would be: PreparedStatement ps = session.prepare( "INSERT INTO simplex.songs " + "(id, title, album, artist, tags) " + "VALUES (?, ?, ?, ?, ?);"); ps.bind(UUID.fromString("756716f7-2e54-4715-9f00-91debea6cf50"), "La Petite Tonkinoise", "Bye Bye Blackbird", "Joséphine Baker"); As you can see, I can bind data to a PreparedStatement without a BoundStatement. Where is a BoundStatement useful?
No advantage: a BoundStatement is nothing more than a PreparedStatement with its variables bound. In fact, the bind() method of a PreparedStatement returns a BoundStatement.
Cassandra
25,130,236
10
So, my original problem was using the token() function to page through a large data set in Cassandra 1.2.9, as explained and answered here: Paging large resultsets in Cassandra with CQL3 with varchar keys The accepted answer got the select working with tokens and chunk size, but another problem manifested itself. My table looks like this in cqlsh: key | column1 | value ---------------+-----------------------+------- 85.166.4.140 | county_finnmark | 4 85.166.4.140 | county_id_20020 | 4 85.166.4.140 | municipality_alta | 2 85.166.4.140 | municipality_id_20441 | 2 93.89.124.241 | county_hedmark | 24 93.89.124.241 | county_id_20005 | 24 The primary key is a composite of key and column1. In CLI, the same data looks like this: get ip['85.166.4.140']; => (counter=county_finnmark, value=4) => (counter=county_id_20020, value=4) => (counter=municipality_alta, value=2) => (counter=municipality_id_20441, value=2) Returned 4 results. The problem When using cql with a limit of i.e. 100, the returned results may stop in the middle of a record, like this: key | column1 | value ---------------+-----------------------+------- 85.166.4.140 | county_finnmark | 4 85.166.4.140 | county_id_20020 | 4 leaving these to "rows" (columns) out: 85.166.4.140 | municipality_alta | 2 85.166.4.140 | municipality_id_20441 | 2 Now, when I use the token() function for the next page like, these two rows are skipped: select * from ip where token(key) > token('85.166.4.140') limit 10; Result: key | column1 | value ---------------+------------------------+------- 93.89.124.241 | county_hedmark | 24 93.89.124.241 | county_id_20005 | 24 95.169.53.204 | county_id_20006 | 2 95.169.53.204 | county_oppland | 2 So, no trace of the last two results from the previous IP address. Question How can I use token() for paging without skipping over cql rows? Something like: select * from ip where token(key) > token(key:column1) limit 10;
Ok, so I used the info in this post to work out a solution: http://www.datastax.com/dev/blog/cql3-table-support-in-hadoop-pig-and-hive (section "CQL3 pagination"). First, I execute this cql: select * from ip limit 5000; From the last row in the resultset, I get the key (i.e. '85.166.4.140') and the value from column1 (i.e. 'county_id_20020'). Then I create a prepared statement evaluating to select * from ip where token(key) = token('85.166.4.140') and column1 > 'county_id_20020' ALLOW FILTERING; (I'm guessing it would work also without using the token() function, as the check is now for equal:) select * from ip where key = '85.166.4.140' and column1 > 'county_id_20020' ALLOW FILTERING; The resultset now contains the remaining X rows (columns) for this IP. The method then returns all the rows, and the next call to the method includes the last used key ('85.166.4.140'). With this key, I can execute the following select: select * from ip where token(key) > token('85.166.4.140') limit 5000; which gives me the next 5000 rows from (and including) the first IP after '85.166.4.140'. Now, no columns are lost in the paging. UPDATE Cassandra 2.0 introduced automatic paging, handled by the client. More info here: http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0 (note that setFetchSize is optional and not necessary for paging to work)
Cassandra
23,625,481
10
I am new to DataStax Cassandra. While going through the Cassandra installation procedure, I saw that it is recommended to turn off the OS swap area. Can anyone provide the reason for that? Will it affect any OS-level operations?
In production, if your database is using swap you will have very bad performance. In a ring of Cassandra nodes, you are better off having one node go completely down than allowing it to limp along in swap. The easiest way to ensure that you never go into swap is to simply disable it.
Cassandra
22,988,824
10
We're looking to be able to do something like: select * from User where rewards.size > 0 when the schema is create table user ( id uuid primary key, network_id uuid, rewards set<uuid>, name text );
Sadly, Cassandra is focused on very fast reads but its support for collections is not complete: Get count of elements in Set type column in Cassandra. Unfortunately the collection support, even in the CQL driver v2, is not perfect: you may add or delete items in upsert statements, but doing more with them, such as selecting an individual item, asking for a collection item's TTL or asking for the collection's size, is not supported. So you have to fetch the whole collection (SELECT collection_column FROM ...), take the row via resultset.one() or resultset.all(), and get item.size() yourself on the client. If you need the element count, what you want to do is add an indexed column with a counter so you can read the count quickly. If you just want to know whether the collection is non-empty, an indexed boolean column is enough. The index makes sure you can scan the column efficiently.
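For the "is it non-empty?" case specifically, here is a hedged sketch of the boolean-flag idea against the question's table (the index name and UUID are placeholders, and remember that a secondary index on a low-cardinality boolean is cheap to maintain but can be expensive to query on large clusters):
    ALTER TABLE user ADD has_rewards boolean;
    CREATE INDEX user_has_rewards_idx ON user (has_rewards);
    -- keep the flag in sync whenever the rewards set changes
    UPDATE user SET has_rewards = true WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;
    SELECT * FROM user WHERE has_rewards = true;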
Cassandra
20,555,428
10
How can I check if a non-primary key field's value is either 'A' or 'B' with a Cassandra CQL query? (I'm using Cassandra 2.0.1) Here's the table definition: CREATE TABLE my_table ( my_field text, my_field2 text, PRIMARY KEY (my_field) ); I tried: 1> SELECT * FROM my_table WHERE my_field2 IN ('A', 'B'); 2> SELECT * FROM my_table WHERE my_field2 = 'A' OR my_field = 'B' ; The first one failed with this message: Bad Request: IN predicates on non-primary-key columns (my_field2) is not yet supported The second one failed because Cassandra CQL doesn't support the OR keyword. I couldn't get this simple query working in a straightforward way, and I'm pretty frustrated dealing with CQL queries in general. Is it because Cassandra is not mature enough and has really poor support for queries, or is it me who must change my way of thinking?
This is the intentional functionality of cassandra. You cannot query using a WHERE clause on columns that are not the partition key part of a composite key This is because your data is partitioned around a ring of cassandra nodes. You want to avoid having to ask the entire ring to return the answer to your query. Ideally you want to be able to retrieve your data from a single node in the ring Generally in cassandra you want to structure your table to match your queries as opposed to relational normalization. So you have a few options to deal with this. 1) write your data to multiple tables to support various queries. In your case you may want to create a second table as CREATE TABLE my_table ( my_field2 text, my_field text, PRIMARY KEY (my_field2) ); Then your first query will return correctly 2) Create your table with a composite key as CREATE TABLE my_table ( my_field text, my_field2 text, PRIMARY KEY (my_field, my_field2) ); With this method, if you do not specify a query value for my_field then you will need to append your query with a qualifier to tell cassandra that you really want to query the entire ring SELECT * FROM my_table WHERE my_field2 IN ('A', 'B') ALLOW FILTERING; -edit- you cannot use a secondary index to search for multiple values. Per CQL documentation http://www.datastax.com/documentation/cql/3.0/webhelp/cql/ddl/ddl_primary_index_c.html "An index is a data structure that allows for fast, efficient lookup of data matching a given condition." So, you must give it one and only one value.
Cassandra
19,231,778
10
How do you query and filter by timeuuid, ie assuming you have a table with create table mystuff(uuid timeuuid primary key, stuff text); ie how do you do: select uuid, unixTimestampOf(uuid), stuff from mystuff order by uuid desc limit 2000 I also want to be able to fetch the next older 2000 and so on, but thats a different problem. The error is: Bad Request: ORDER BY is only supported when the partition key is restricted by an EQ or an IN. and just in case it matters, the real table is actually this: CREATE TABLE audit_event ( uuid timeuuid PRIMARY KEY, event_time bigint, ip text, level text, message text, person_uuid timeuuid ) WITH bloom_filter_fp_chance=0.010000 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.000000 AND gc_grace_seconds=864000 AND read_repair_chance=0.100000 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'};
I would recommend that you design your table a bit differently. It would be rather hard to achieve what you're asking for with the design you have currently. At the moment each of your entries in the audit_event table will receive another uuid, internally Cassandra will create many short rows. Querying for such rows is inefficient, and additionally they are ordered randomly (unless using Byte Ordered Partitioner, which you should avoid for good reasons). However Cassandra is pretty good at sorting columns. If (back to your example) you declared your table like this : CREATE TABLE mystuff( yymmddhh varchar, created timeuuid, stuff text, PRIMARY KEY(yymmddhh, created) ); Cassandra internally would create a row, where the key would be the hour of a day, column names would be the actual created timestamp and data would be the stuff. That would make it efficient to query. Consider you have following data (to make it easier I won't go to 2k records, but the idea is the same): insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '90'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '91'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '92'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '93'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '94'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '95'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '96'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '97'); insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '98'); Now lets say that we want to select last two entries (let's a assume for the moment that we know that the "latest" row key to be '13081616'), you can do it by executing query like this: SELECT * FROM mystuff WHERE yymmddhh = '13081616' ORDER BY created DESC LIMIT 2 ; which should give you something like this: yymmddhh | created | stuff ----------+--------------------------------------+------- 13081616 | 547fe280-067e-11e3-8751-97db6b0653ce | 98 13081616 | 547f4640-067e-11e3-8751-97db6b0653ce | 97 to get next 2 rows you have to take the last value from the created column and use it for the next query: SELECT * FROM mystuff WHERE yymmddhh = '13081616' AND created < 547f4640-067e-11e3-8751-97db6b0653ce ORDER BY created DESC LIMIT 2 ; If you received less rows than expected you should change your row key to another hour. Row key handling / calculation For now I've assumed that we know the row key with which we want to query the data. If you log a lot of information I'd say that's not the problem - you can take just current time and issue a query with the hour set to what hour we have now. If we run out of rows we can subtract one hour and issue another query. However if you don't know where your data lies, or if it's not distributed evenly, you can create metadata table, where you'd store the information about the row keys: CREATE TABLE mystuff_metadata( yyyy varchar, yymmddhh varchar, PRIMARY KEY(yyyy, yymmddhh) ) WITH COMPACT STORAGE; The row keys would be organized by a year, so to get the latest row key from the current year you'd have to issue a query: SELECT yymmddhh FROM mystuff_metadata where yyyy = '2013' ORDER BY yymmddhh DESC LIMIT 1; Your audit software would have to make an entry to that table on start and later on each hour change (for example before inserting data to mystuff).
Cassandra
18,274,007
10
I have a little misunderstanding about composite row keys with CQL in Cassandra. Let's say I have the following cqlsh:testcql> CREATE TABLE Note ( ... key int, ... user text, ... name text ... , PRIMARY KEY (key, user) ... ); cqlsh:testcql> INSERT INTO Note (key, user, name) VALUES (1, 'user1', 'name1'); cqlsh:testcql> INSERT INTO Note (key, user, name) VALUES (1, 'user2', 'name1'); cqlsh:testcql> cqlsh:testcql> SELECT * FROM Note; key | user | name -----+-------+------- 1 | user1 | name1 1 | user2 | name1 How is this data stored? Are there 2 rows or one? If two, then how is it possible to have more than one row with the same key? If one, then with records having key=1 and user from "user1" to "user1000", does that mean there will be one row with key=1 and 1000 columns containing the names for each user? Can someone explain what's going on in the background? Thanks.
So, after digging a bit more and reading an article suggested by Lyuben Todorov (thank you), I found the answer to my question. Cassandra stores data in data structures called rows, which are totally different from rows in relational databases. Rows have a unique key. Now, what's happening in my example... In table Note I have a composite key defined as PRIMARY KEY (key, user). Only the first element of this key acts as a row key, and it's called the partition key. Internally, the rest of this key is used to build composite columns. In my example key | user | name -----+-------+------- 1 | user1 | name1 1 | user2 | name1 This will be represented in Cassandra in one row as ------------------------------------- | | user1:name | user2:name | | 1 |-------------------------------- | | name1 | name1 | ------------------------------------- Knowing that, it's clear that it's not a good idea to add a column with a huge (and growing) number of unique values to the composite key, because it will all be stored in one row. It's even worse if you have multiple columns like this in a composite primary key. Update: Later I found this blog post by Aaron Morton that explains the same in more detail.
Cassandra
17,705,328
10
How can I pull in a range of Composite columns with CQL3? Consider the following: CREATE TABLE Stuff ( a int, b text, c text, d text, PRIMARY KEY (a,b,c) ); In Cassandra what this effectively does is creates a ColumnFamily with integer rows (values of a) and with CompositeColumns composed of the values of b and c and the literal string 'd'. Of course this is all covered up by CQL3 so that we will think that we're inserting into individual database rows... but I digress. And consider the following set of inputs: INSERT INTO Stuff (a,b,c,d) VALUES (1,'A','P','whatever0'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'A','Q','whatever1'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'A','R','whatever2'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'A','S','whatever3'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'A','T','whatever4'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'B','P','whatever5'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'B','Q','whatever6'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'B','R','whatever7'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'B','S','whatever8'); INSERT INTO Stuff (a,b,c,d) VALUES (1,'B','T','whatever9'); In my current use case, I want to read all of the values of Stuff, n values at a time. How do I do this? Here's my current take using n=4: SELECT * FROM Stuff WHERE a=1 LIMIT 4; And as expected I get: a | b | c | d ---+---+---+----------- 1 | A | P | whatever0 1 | A | Q | whatever1 1 | A | R | whatever2 1 | A | S | whatever3 The trouble that I run into is how do I get the next 4? Here is my attempt: SELECT * FROM Stuff WHERE a=1 AND b='A' AND c>'S' LIMIT 4; This doesn't work because we've constrained b to equal 'A' - which is a reasonable thing to do! But I've found nothing in the CQL3 syntax that allows me to keep iterating anyway. I wish I could do something like: SELECT * FROM Stuff WHERE a=1 AND {b,c} > {'A','S'} LIMIT 4; How do I achieve my desired result. Namely, how do I make CQL3 return: a | b | c | d ---+---+---+----------- 1 | A | T | whatever0 1 | B | P | whatever1 1 | B | Q | whatever2 1 | B | R | whatever3
Automatic paging has been implemented (https://issues.apache.org/jira/browse/CASSANDRA-4415); it was released in Cassandra 2.0.1.
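If you would rather solve it in CQL itself, I believe later releases (around Cassandra 2.0.6, via CASSANDRA-4851) also added multi-column relations on clustering columns, which is essentially the syntax the question wished for; treat the exact version as something to verify:
    SELECT * FROM Stuff WHERE a = 1 AND (b, c) > ('A', 'S') LIMIT 4;
That continues the scan from the last (b, c) pair you saw without skipping any rows of the partition.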
Cassandra
17,664,438
10
I am a newbie to Cassandra. I need to copy data from one column family to another column family in the same keyspace in Cassandra. Say, for example, we have a column family A1 in keyspace K1; now I need to create column family A2 in the same keyspace K1 and copy the data from A1 to A2. A1 and A2 have the same schema. I read in the online docs that we can use sstableloader to copy data from one Cassandra cluster to another, but here I need to copy data from one column family to another within the same keyspace. Any ideas on the above? I am not sure whether it is even achievable.
Depending on the Cassandra version, you can use the CQL COPY command. To use it you need a CQL client like the one that is distributed with Cassandra (cqlsh). First you have to copy the column family A1 to a CSV file using: COPY K1.A1 (column1, column2,...) TO 'temp.csv'; and after that load the file into the new column family: COPY K1.A2 (column1, column2,...) FROM 'temp.csv'; Obviously you have to change the column names to your own. More information: http://www.datastax.com/documentation/cassandra/1.2/index.html#cassandra/cql_reference/copy_r.html
Cassandra
17,542,239
10
Prior to CQL3 one could insert arbitrary columns such as columns that are named by a date: cqlsh:test>CREATE TABLE seen_ships (day text PRIMARY KEY) WITH comparator=timestamp AND default_validation=text; cqlsh:test>INSERT INTO seen_ships (day, '2013-02-02 00:08:22') VALUES ('Tuesday', 'Sunrise'); Per this post It seems that things are different in CQL3. Is it still somehow possible to insert arbitrary columns? Here's my failed attempt: cqlsh:test>CREATE TABLE seen_ships ( day text, time_seen timestamp, shipname text, PRIMARY KEY (day, time_seen) ); cqlsh:test>INSERT INTO seen_ships (day, 'foo') VALUES ('Tuesday', 'bar'); Here I get Bad Request: line 1:29 no viable alternative at input 'foo' So I try a slightly different table because maybe this is a limitation of compound keys: cqlsh:test>CREATE TABLE seen_ships ( day text PRIMARY KEY ); cqlsh:test>INSERT INTO seen_ships (day, 'foo') VALUES ('Tuesday', 'bar'); Again with the Bad Request: line 1:29 no viable alternative at input 'foo' What am I missing here?
There's a good blog post over on the Datastax blog about this: http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows The answer is that yes, CQL3 supports dynamic colums, just not the way it worked in earlier versions of CQL. I don't really understand your example, you mix datestamps with strings in a way I don't see how it worked in CQL2 either. If I understand you correctly you want to make a timeline of ship sightings, where the partition key (row key) is the day and each sighting is a time/name pair. Here's a suggestion: CREATE TABLE ship_sightings ( day TEXT, time TIMESTAMP, ship TEXT, PRIMARY KEY (day, time) ) And you insert entries with INSERT INTO ship_sightings (day, time, ship) VALUES ('Tuesday', NOW(), 'Titanic') however, you should probably use a TIMEUUID instead of TIMESTAMP (and the primary key could be a DATE), since otherwise you might add two sightings with the same timestamp and only one will survive. This was an example of wide rows, but then there's the issue of dynamic columns, which isn't exactly the same thing. Here's an example of that in CQL3: CREATE TABLE ship_sightings_with_properties ( day TEXT, time TIMEUUID, ship TEXT, property TEXT, value TEXT, PRIMARY KEY (day, time, ship, property) ) which you can insert into like this: INSERT INTO ship_sightings_with_properties (day, time, ship, property, value) VALUES ('Sunday', NOW(), 'Titanic', 'Color', 'Black') # you need to repeat the INSERT INTO for each statement, multiple VALUES isn't # supported, but I've not included them here to make this example shorter VALUES ('Sunday', NOW(), 'Titanic', 'Captain', 'Edward John Smith') VALUES ('Sunday', NOW(), 'Titanic', 'Status', 'Steaming on') VALUES ('Monday', NOW(), 'Carapathia', 'Status', 'Saving the passengers off the Titanic') The downside with this kind of dynamic columns is that the property names will be stored multiple times (so if you have a thousand sightings in a row and each has a property called "Captain", that string is saved a thousand times). On-disk compression takes away most of that overhead, and most of the time it's nothing to worry about. Finally a note about collections in CQL3. They're a useful feature, but they are not a way to implement wide rows or dynamic columns. First of all they have a limit of 65536 items, but Cassandra can't enforce this limit, so if you add too many elements you might not be able to read them back later. Collections are mostly for small multi-values fields -- the canonical example is an address book where each row is an entry and where entries only have a single name, but multiple phone numbers, email addresses, etc.
Cassandra
17,439,179
10
I am using Cassandra 1.2.3 and can execute a select query with LIMIT 10. If I want records 10 to 20, I cannot do "LIMIT 10,20"; the query below gives me an error. select * from page_view_counts limit 10,20 How can this be achieved? Thanks Nikhil
You can't do skips like this in CQL. You have to do paging by specifying a starting place, e.g. select * from page_view_counts where field >= 'x' limit 10; to get the next 10 elements starting from x. I wrote a full example in this answer: Cassandra pagination: How to use get_slice to query a Cassandra 1.2 database from Python using the cql library.
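When the column you page on is the partition key itself (as page_view_counts' key appears to be), the usual variant is to page in token order rather than by value; this is only a sketch and assumes the partition key is a text column named key:
    SELECT * FROM page_view_counts LIMIT 10;
    -- note the key of the last row returned, then fetch the next page with:
    SELECT * FROM page_view_counts WHERE token(key) > token('<last key seen>') LIMIT 10;
Each page then starts where the previous one left off in the partitioner's token order.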
Cassandra
17,025,490
10
I want to add columns dynamically to this column family via code using CQL. CREATE COLUMN FAMILY blog_entry WITH comparator = UTF8Type AND key_validation_class=UTF8Type AND default_validation_class = UTF8Type; How shall I do it?
This is becoming something of a FAQ, so I wrote an in-depth explanation: http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows
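The short version of that post, as I understand it: in CQL3 a "dynamic column" column family is modelled with a clustering column, so the column name becomes data. A sketch roughly matching the blog_entry definition above (COMPACT STORAGE keeps the layout close to the Thrift column family; verify the exact mapping against the linked article):
    CREATE TABLE blog_entry (
      key text,
      column1 text,
      value text,
      PRIMARY KEY (key, column1)
    ) WITH COMPACT STORAGE;
    INSERT INTO blog_entry (key, column1, value) VALUES ('row1', 'any-column-name', 'some value');
Every distinct column1 value you insert for a given key behaves like a new dynamic column on that row.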
Cassandra
16,938,285
10
IMPORTANT If you are dealing with this problem today, use the new cassandra-driver from datastax (i.e. import cassandra) since it solves most of this common problems and don't use the old cql driver anymore, it is obsolete! This question is old from before the new driver was even in development and we had to use an incomplete old library called cql (import cql <-- don't use this anymore, move to the new driver). Intro I'm using the python library cql to access a Cassandra 1.2 database. In the database I have a table with a timestamp column and in my Python code I have a datetime to be inserted in the column. Example as follows: Table CREATE TABLE test ( id text PRIMARY KEY, last_sent timestamp ); The code import cql import datetime ... cql_statement = "update test set last_sent = :last_sent where id =:id" rename_dict = {} rename_dict['id'] = 'someid' rename_dict['last_sent'] = datetime.datetime.now() cursor.execute (cql_statement, rename_dict) The problem When I execute the code the actual cql statement executed is like this: update test set last_sent =2013-05-13 15:12:51 where id = 'someid' Then it fails with an error Bad Request: line 1:XX missing EOF at '-05' The problem seems to be that the cql library is not escaping ('') or converting the datetime before running the query. The question What is the correct way of doing this without manually escaping the date and be able to store a full timestamp with more precision into a cassandra timestamp column? Thanks in advance!
I can tell you how to do it in cqlsh. Try this: update test set last_sent = 1368438171000 where id = 'someid' The equivalent long value for the datetime 2013-05-13 15:12:51 is 1368438171000.
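In other words, pass the timestamp as an epoch-milliseconds integer rather than an unquoted datetime. If you prefer a readable literal, CQL also accepts the timestamp as a quoted string; a small sketch (the timezone suffix is my assumption, so adjust it to the zone the datetime was produced in):
    UPDATE test SET last_sent = '2013-05-13 15:12:51+0000' WHERE id = 'someid';
On the Python side, converting the datetime to that epoch-milliseconds integer before binding it sidesteps the escaping problem entirely.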
Cassandra
16,532,566
10
I am using the following versions of cql and Cassandra. See below: cqlsh 2.3.0 | Cassandra 1.1.10 | CQL spec 3.0.0 | Thrift protocol 19.33.0 I have all my CFs and tons of data in them. When I run: cqlsh -2 or cqlsh -3 cql> USE "test_keyspace"; cql:test_keyspace> SELECT * FROM "column_family_name" LIMIT 1; Note: the CFs were created using pycassa and I am trying to read them using cql. Not sure if this makes a difference. It throws: Bad Request: unconfigured columnfamily Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh. Are there any specific settings that I need to be looking at to resolve this issue?
By default, identifiers in CQL3 are case-insensitive unless they are enclosed in double quotation marks. Try putting your CF name in double quotes: SELECT * FROM "CFNAME" LIMIT 1; It also looks like you may have forgotten to enable CQL3 from the API at the time of table creation.
Cassandra
16,153,280
10
As titled. I want to know whether my query is well optimized.
Yes, in Cassandra 1.2 you can turn on request tracing for queries.
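In cqlsh that is just a toggle around the query you want to inspect; a minimal sketch with placeholder keyspace and table names:
    TRACING ON;
    SELECT * FROM my_keyspace.my_table WHERE id = 1;
    TRACING OFF;
With tracing on, cqlsh prints the trace session after the result, listing each step (coordinator work, replicas contacted, index or sequential scans) with its elapsed time, which is usually enough to judge whether the query takes the access path you expect.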
Cassandra
15,716,107
10
I am using Cassandra 1.2.2. I am finding it very easy to use Jackson to map my objects to and from JSON for storing in the database. I am actually tempted to do this for all of my data. My question is: is this a good idea? What are the disadvantages of doing this in my application? My first guess is probably more processing overhead, but is the juice worth the squeeze? And are there any other disadvantages that I need to know about?
One disadvantage is that to modify the data you have to read in the original, deserialize, make your change, serialize and write out the whole object. In Cassandra, writes are much more efficient than reads so it is beneficial to avoid reads before writes if possible. The alternative is to use separate columns for each field in your JSON. You can use composite columns for multi-dimensional data. So if you had the data: { name: "fred" address: "some town" age: 42 } and you wanted to change the address, if you had these as separate Cassandra columns you'd just insert a column called address. If you had the JSON serialized you'd have to do much more work. This doesn't apply if your data is write-once. Even if your data is write-once, if you just wanted to read one field from the data you can just read that column if stored separately rather than reading the whole thing and deserializing. This only applies if you want to read parts of your data. In conclusion, there could be significant performance advantages to using separate columns if you have to update your data or if you only want to read parts at once.
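To make the trade-off concrete, here is a hedged sketch of the separate-columns alternative for the example data above (table and column names are placeholders):
    CREATE TABLE person (
      id uuid PRIMARY KEY,
      name text,
      address text,
      age int
    );
    -- change one field without reading or rewriting anything else
    UPDATE person SET address = 'some other town' WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;
With a single JSON blob column you would instead have to read the blob, deserialize it, modify it, re-serialize it and write the whole thing back.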
Cassandra
15,500,898
10
Initially I started learning Cassandra because dynamic columns caught my attention. As I learnt more, I found that composite primary keys are preferred over dynamic columns and that Cassandra is moving towards being schema-based (a schema is optional and not compulsory, but recommended). In CQL3 it is compulsory, though, and I read that CQL3 is the best approach for new applications in Cassandra. Here is where I face an interesting question. I was reading a particular slide deck (MySQL vs Cassandra) - http://lanyrd.com/2012/austin-mysql-meetup-january/spdrx/ (jump to slide 31) where it discusses a fraud-detection use case. "In FraudDetection To calculate risk, it is common to need to know all the emails,destinations,origins,devices,locations,phonenumbers,etcetera ever used for the account in question. " It explains how we have to maintain individual tables for emails, destinations, origins etc. in the relational world, and how easy it is in the Cassandra world with dynamic column keys and values (slides 31-34). Now that dynamic column keys and values are discouraged, how can we solve this problem? Should we maintain individual column families for emails, destinations etc.? Then how is it different from the relational world? Is it only about scalability? Can we still go ahead with a schema-less approach? Is this the golden rule: "Schema is optional and recommended but not mandatory"? Thanks
Sorry for the confusion here. As it turned out, I didn't understand the basic concepts properly. Here is the answer: dynamic columns are at the core of Cassandra. They are still supported and are still the core :) It's just that in Thrift you do it directly, while in CQL you do it in a different manner (through the schema). But you still do it :) - Read this - http://www.datastax.com/dev/blog/thrift-to-cql3 And regarding how Cassandra is better than MySQL - read this http://lanyrd.com/2012/austin-mysql-meetup-january/spdrx/ (slides 16-24) Thanks :)
Cassandra
15,404,280
10
I am using Cassandra to store my data and Hive to process it. I have 5 machines on which I have set up Cassandra, and 2 machines I use as analytics nodes (where Hive runs). So I want to ask: does Hive run MapReduce on just the two analytics machines and bring the data there, or does it move the processing/computation to the 5 Cassandra nodes as well and compute the data on those machines? (What I know is that in Hadoop, the process moves to the data, not the data to the process.)
If you are interested in marrying Hadoop and Cassandra, the first link should be the DataStax company, which is built around this concept: http://www.datastax.com/ They built and support Hadoop with HDFS replaced by Cassandra. To the best of my understanding, they do have data locality: http://blog.octo.com/en/introduction-to-datastax-brisk-an-hadoop-and-cassandra-distribution/ There is a good answer about Hadoop & Cassandra data locality if you run MapReduce against Cassandra: Cassandra and MapReduce - minimal setup requirements Regarding your question, there is a trade-off: a) If you run Hadoop / Hive on separate nodes you lose data locality, and therefore your data throughput is limited by your network bandwidth. b) If you run Hadoop / Hive on the same nodes where Cassandra runs, you can get data locality, but the MapReduce processing behind Hive queries might clog your network (and other resources) and therefore affect your quality of service from Cassandra. My suggestion would be to have separate Hive nodes if the performance of your Cassandra cluster is critical. If your Cassandra is mostly used as a data store and does not handle real-time requests, then running Hive on each node will improve performance and hardware utilization.
Cassandra
14,827,693
10
So there is a fair amount of documentation on how to scale up Cassandra, but is there a good resource on how to "unscale" Cassandra and remove nodes from the cluster? Is it as simple as turning off a node, letting the cluster sync up again, and repeating? The reason is a site that expects high spikes of traffic, climbing from a few thousand hits daily to hundreds of thousands over a few days. The site will be "ramped up" beforehand, starting multiple instances of the web server, Cassandra, etc. After the torrent of requests subsides, the goal is to turn off the instances that are no longer used, rather than pay for servers that are just sitting around.
If you just shut the nodes down and rebalance the cluster, you risk losing data that exists only on the removed nodes and hasn't been replicated yet. A safe cluster shrink can easily be done with nodetool. First, run: nodetool drain ... on the node being removed, to stop accepting writes and flush memtables; then: nodetool decommission to move the node's data to other nodes; then shut the node down and run on some other node: nodetool removetoken ... to remove the node from the cluster completely. The detailed documentation can be found here: http://wiki.apache.org/cassandra/NodeTool From my experience, I'd recommend removing nodes one by one, not in batches. It takes more time, but it is much safer in case of network outages or hardware failures.
Cassandra
13,300,709
10
I'm having trouble understanding the replication factor in Cassandra. The documentation says: "The total number of replicas across the cluster is often referred to as the replication factor". On the other hand, the same documentation says that "NetworkTopologyStrategy allows you to specify how many replicas you want in each data center". So, if I have 2 data centers with NetworkTopologyStrategy, does a replication factor of 2 mean I'll have 2 replicas per data center, or 2 replicas overall in the cluster? Thank you.
When using the NetworkTopologyStrategy, you specify your replication factor on a per-data-center basis using strategy_options:{data-center-name}={rep-factor-value} rather than the global strategy_options:replication_factor={rep-factor-value}. Here's a concrete example adapted from http://www.datastax.com/docs/1.0/references/cql/CREATE_KEYSPACE CREATE KEYSPACE Excalibur WITH strategy_class = 'NetworkTopologyStrategy' AND strategy_options:DC1 = 2 AND strategy_options:DC2 = 2; In that example, any given column would be stored on 4 nodes total, with 2 in each data center.
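For what it's worth, on newer releases (CQL3 in Cassandra 1.2 and later) the same keyspace is usually defined with a replication map instead of the strategy_options syntax; a sketch of the equivalent form, keeping the data-center names from the example:
    CREATE KEYSPACE Excalibur
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};
Either way, each data center keeps its own 2 copies, for 4 replicas of every row cluster-wide.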
Cassandra
12,544,844
10
I have a PostgreSQL database for a software-as-a-service product with hundreds of customers. Currently I have one PostgreSQL schema per customer, but I would like a better solution because the number of customers is increasing rapidly. I have read about Cassandra, but I don't want to lose the integrity of primary keys, foreign keys and checks. I have also read about PostgreSQL in distributed systems, but I don't know what the best way to implement this is.
There are four levels at which you can separate your customers: Run a separate PostgreSQL cluster for each customer. This provides maximum separation; each client is on a separate port with its own set of system tables, transaction log, etc. Put each customer in a separate database in the same cluster. This way they each have a separate login, but on the same port number, and they share global tables like pg_database. Give each customer a separate schema in the same database. This doesn't require separate user IDs if they are only connecting through your software, because you can just set the search_path. Of course you can use separate user IDs if you prefer. Make customer_id part of the primary key of each table, and be sure to limit by that in your software. This is likely to scale better than having duplicate tables for each of hundreds of users, but you must be very careful to always qualify your queries by customer_id. Some people have been known to combine these techniques, for example, limiting each cluster to 100 databases with a separate database for each customer. Without more detail it's hard to know which configuration will be best for your situation, except to say that if you want to allow users direct access to the database, without going through your software, you need to think about what is visible in system tables with each option. Look at pg_database, pg_user, and pg_class from a user perspective, to see what is exposed.
Cassandra
10,400,654
10
I have a 3-node Cassandra cluster with a replication factor of 2. One of the nodes has been replaced with a new one, and I have used "nodetool repair" to repair all the keyspaces, but I don't know how to verify that all the keyspaces are in sync. I found this article beforehand, which helped, but only a little: Cassandra Data Replication problem Is there any way to verify the keyspaces with replication factor > 1 in Cassandra? Thanks a lot. stephon
First, if you run nodetool repair again and very little data is transferred (assuming all nodes have been up since the last time you ran), you know that the data is almost perfectly in sync. You can look at the logs to see numbers on how much data is transferred during this process. Second, you can verify that all of the nodes are getting a similar number of writes by looking at the write counts with nodetool cfstats. Note that the write count value is reset each time Cassandra restarts, so if they weren't restarted around the same time, you'll have to see how quickly they are each increasing over time. Last, if you just want to spot check a few recently updated values, you can try reading those values at consistency level ONE. If you always get the most up-to-date version of the data, you'll know that the replicas are likely in sync. As a general note, replication is such an ingrained part of Cassandra that it's extremely unlikely to fail on its own without you noticing. Typically a node will be marked down shortly after problems start. Also, I'm assuming you're writing at consistency level ONE or ANY; with anything higher, you know for sure that both of the replicas have received the write.
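For the spot-check approach, recent cqlsh builds let you pin the consistency level for the session; a minimal sketch with placeholder keyspace, table and key (the idea is simply to repeat the read a few times and watch for stale values):
    CONSISTENCY ONE;
    SELECT * FROM my_keyspace.my_table WHERE id = 42;
For the write-count comparison, nodetool cfstats on each node reports per-table write counts that you can compare across replicas.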
Cassandra
9,885,281
10
From Cassandra's presentation slides (slide 2) link 1, alternate link: scaling writes to a relational database is virtually impossible I cannot understand this statement, because when I shard my database, I am scaling writes, aren't I? And they seem to claim otherwise. Does anyone know why sharding a database doesn't count as scaling writes?
The slowness of physical disk subsystems is usually the single greatest challenge to overcome when trying to scale a database to service a very large number of concurrent writers. But it is not "virtually impossible" to optimize writes to a relational database. It can be done.

Yet there is a trade-off: when you optimize writes, selects of large subsets of logically related data usually are slower. The writes of the primary data to disk and the rebalancing of index trees can be disk-intensive. The maintenance of clustered indexes, whereby rows that belong logically together are stored physically contiguous on disk, is also disk-intensive. Such indexes make selects (reads) quicker while slowing writes. A heavily indexed table does not scale well therefore, and the lower the cardinality of the index, the less well it scales.

One optimization aimed at improving the speed of concurrent writers is to use sparse tables with hashed primary keys and minimal indexing. This approach eliminates the need for an index on the primary key value and permits an immediate seek to the disk location where a row lives, 'immediate' in the sense that the intermediary of an index read is not required. The hashed primary key algorithm returns the physical address of the row using the primary key value itself -- a simple computation that requires no disk access.

The sparse table is exactly the opposite of storing logically related data so they are physically contiguous. In a sparse table, writers do not step on each others' toes, so to speak. Writes are like raindrops falling on a large field, not like a crowd of people on a subway platform trying to step into the train through a few open doors. The sparse table helps to eliminate write bottlenecks.

However, because logically related data are not physically contiguous, but scattered, the act of gathering all rows in a certain zipcode, say, is expensive. This sparse-table hashed-pk optimization is therefore optimal only when the predominant activity is the insertion of records, the update of individual records, and the lookup of data relating to a single entity at a time rather than to a large set of entities, as in, say, an order-entry system. A company that sold merchandise on TV and had to service tens of thousands of simultaneous callers placing orders would be well served by a system that used sparse tables with hashed primary keys. A national security database that relied upon linked lists would also be well served by this approach. Many social networking applications could also use it to advantage.
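A toy Python sketch of the hashed-primary-key idea described above (not any particular database's implementation): the key alone determines the physical slot, so no index read or index rebalancing happens on write.

import hashlib

NUM_SLOTS = 1000003          # hypothetical size of the sparse table
table = {}                   # stand-in for the sparse physical storage

def slot_for(primary_key):
    digest = hashlib.md5(primary_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SLOTS

def write(primary_key, row):
    table[slot_for(primary_key)] = row       # direct "seek", no index maintenance

def read(primary_key):
    return table.get(slot_for(primary_key))  # single-entity lookup stays cheap

Gathering all rows for one zipcode would still require scanning every slot, which is exactly the trade-off the answer describes.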
Cassandra
6,668,360
10
How does a Cassandra secondary index work internally? The docs state it is some kind of hash index. Given that I have the column username="foobar" (the column username will be the secondary index) in a CF User with RandomOrderingPartitioner:

1. Is my assumption correct that Cassandra uses a "distributed hash index" (i.e. the index is not on one single node but is split across nodes)?
2. On how many nodes are the index parts held (the same number as the replication factor)?
3. On which nodes are the index parts held (does Cassandra split the index by the same logic as the key with RandomOrderingPartitioner)?
4. In case the index is held on only one node (and of course replicated), how does Cassandra determine the node that is responsible for the index (by hashing the column name and then using the RandomPartitioner logic to determine the node)?
5. Is it really true that this index is optimized for low cardinality? If yes, what is a rough estimate (is there a concrete figure I can use to judge) beyond which I should not use a secondary index (and rather use a separate CF for the index)? Or put differently, how do I calculate the cardinality and make the right decision?

I am trying to understand this.
Secondary indexes are basically just another column family. They are not directly accessible to users, but you can see statistics via the JMX bean:

org.apache.cassandra.db.IndexedColumnFamilies

You can consult the statistics here to gauge the effectiveness of your index as you would a normal column family.

For more details see these previous posts:

How are Cassandra's 0.7 Secondary Indexes stored?
How scalable are automatic secondary indexes in Cassandra 0.7?

And since you have a hector tag, here is a link to the test case for IndexedSlicesQuery: https://github.com/rantav/hector/blob/master/core/src/test/java/me/prettyprint/cassandra/model/IndexedSlicesQueryTest.java
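For completeness, querying a secondary index from Python with pycassa looks roughly like this (the keyspace name is hypothetical, and an index on 'username' is assumed to exist; the Hector test case linked above shows the Java equivalent):

import pycassa
from pycassa.index import create_index_expression, create_index_clause

pool = pycassa.ConnectionPool('MyKeyspace', server_list=['127.0.0.1:9160'])
users = pycassa.ColumnFamily(pool, 'User')

expr = create_index_expression('username', 'foobar')
clause = create_index_clause([expr], count=100)
for key, columns in users.get_indexed_slices(clause):
    print(key, columns)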
Cassandra
6,418,181
10
I cannot retrieve data from Cassandra after I exit the cli. When I go back to get the data in another session, I get this error:

org.apache.cassandra.db.marshal.MarshalException: cannot parse 'jsmith' as hex bytes

It seems the column family stays and the keyspace too. I also changed the keys format to ascii (assume Users keys as ascii;), and that does not stay set either. Is there a reason why? What is going on?
All of the "assume" commands are temporary and limited to a single cli session. For those not sitting in front of the cli right now, we're referring to these: assume <cf> comparator as <type>; assume <cf> sub_comparator as <type>; assume <cf> validator as <type>; assume <cf> keys as <type>; Assume one of the attributes (comparator, sub_comparator, validator or keys) of the given column family match specified type. The specified type will be used when displaying data returned from the column family. These are purely client side. What you want to do instead is to record that metadata permanently in the ColumnFamily definition, e.g. from the readme, create column family Users with comparator=UTF8Type and default_validation_class=UTF8Type and key_validation_class=UTF8Type; This is different from "assume" in 3 ways: It is permanently recorded in Cassandra Cassandra will enforce the specified type against data inserted All clients can query and use this metadata to do intelligent things with it, not just the cli Assume is primarily there as a workaround for old data sets that cannot be updated to use a "real" server-side type.
Cassandra
6,208,104
10
I created a new Pylons project and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use Cassandra 0.7beta. Unfortunately, I don't know where to instantiate the connection to make it available in my application. The goal would be to:

Create a pool when the application is launched.
Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). Ideally the connection would be taken from the pool "lazily", i.e. only if needed.
If a connection has been used, release it when the request has been processed.

Additionally, is there something important I should know about this? When I see comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does that mean exactly? Thanks. -- Pierre
Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. Opening a connection per request would be. I ended up with just pycassa.connect_thread_local() in app_globals, and there I go.
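A sketch of what that can look like with the newer pycassa API, keeping one pool in Pylons' app_globals so controllers grab column families lazily per operation (keyspace and CF names are hypothetical; connect_thread_local() was the older equivalent):

# lib/app_globals.py
import pycassa

class Globals(object):
    def __init__(self):
        self.cassandra_pool = pycassa.ConnectionPool(
            'MyKeyspace',
            server_list=['127.0.0.1:9160'],
            pool_size=5)

# in a controller, per request:
# cf = pycassa.ColumnFamily(app_globals.cassandra_pool, 'Users')
# cf.get('some-key')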
Cassandra
3,671,535
10
I'd like to use Cassandra to store a counter. For example how many times a given page has been viewed. The counter will never decrement. The value of the counter does not need to be exact but it should be accurate over time. My first thought was to store the value as a column and just read the current count, increment it by one and then put it back in. However if another operation is also trying to increment the counter, I think the final value would just be the one with the latest timestamp. Another thought would be to store each page load as a new column in a CF. Then I could just run get_count() on that key and get the number of columns. Reading through the documentation, it appears that it is not a very efficient operation at all. Am I approaching the problem incorrectly?
Counters were added in Cassandra 0.8. Use the incr method to increment the value of a column by 1:

[default@app] incr counterCF [ascii('a')][ascii('x')];
Value incremented.
[default@app] incr counterCF [ascii('a')][ascii('x')];
Value incremented.

Described here: http://www.jointhegrid.com/highperfcassandra/?p=79

Or it can be done programmatically:

CounterColumn counter = new CounterColumn();
ColumnParent cp = new ColumnParent("page_counts_by_minute");
counter.setName(ByteBufferUtil.bytes(bucketByMinute.format(r.date)));
counter.setValue(1);
c.add(ByteBufferUtil.bytes(bucketByDay.format(r.date)+"-"+r.url), cp, counter, ConsistencyLevel.ONE);

Described here: http://www.jointhegrid.com/highperfcassandra/?cat=7
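The same increment from Python with pycassa, assuming a column family created with default_validation_class=CounterColumnType (keyspace and CF names are hypothetical):

import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', server_list=['127.0.0.1:9160'])
counts = pycassa.ColumnFamily(pool, 'counterCF')

counts.add('a', 'x')            # increment column 'x' of row 'a' by 1
counts.add('a', 'x', value=5)   # or by an arbitrary amount
print(counts.get('a'))          # e.g. {'x': 6}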
Cassandra
3,558,844
10
Heh, I'm using cf.insert(uuid.uuid1().bytes_le, {'column1': 'val1'}) (pycassa) to create a TimeUUID for Cassandra, but I'm getting the error:

InvalidRequestException: InvalidRequestException(why='UUIDs must be exactly 16 bytes')

It doesn't work with uuid.uuid1(), uuid.uuid1().bytes, or str(uuid.uuid1()) either. What's the best way to create a valid TimeUUID to use with the CompareWith="TimeUUIDType" flag? Thanks, Henrik
Looks like you are using the UUID as the row key and not the column name. The 'compare_with: TimeUUIDType' attribute specifies that the column names will be compared using TimeUUIDType, i.e. it tells Cassandra how to sort the columns for slicing operations. Have you considered using any of the high-level Python clients, e.g. Tradedgy, Lazy Boy, Telephus or Pycassa?
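A sketch of the intended usage with pycassa: keep an ordinary row key and put the TimeUUID in the column name, assuming the CF was created with comparator_type=TIME_UUID_TYPE (keyspace and CF names are hypothetical):

import uuid
import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', server_list=['127.0.0.1:9160'])
timeline = pycassa.ColumnFamily(pool, 'UserTimeline')

# Row key is a plain string; each column name is a time-ordered UUID.
timeline.insert('henrik', {uuid.uuid1(): 'val1'})

# Columns come back sorted by time, so a reversed slice gives the newest first.
print(timeline.get('henrik', column_count=10, column_reversed=True))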
Cassandra
3,240,267
10
I'm fairly new to Apache Cassandra and nosql in general. In SQL I can do aggregate operations like: SELECT country, sum(age) / count(*) AS averageAge FROM people GROUP BY country; This is nice because it is calculated within the DB, rather than having to move every row in the 'people' table into the client layer to do the calculation. Is this possible in Apache Cassandra? How?
Cassandra is primarily a mechanism that supports fast writes and lookups. There is no support for calculations like SQL aggregates, since it is not designed for that. I would suggest reading about popular Cassandra use cases to get better insight :) I have bookmarked some articles on my delicious page. Here is the link: http://delicious.com/vibhutesagar/cassandra
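Since the aggregation has to happen client-side, a rough pycassa sketch of computing the average age per country in Python (keyspace, CF and column names are hypothetical):

from collections import defaultdict
import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', server_list=['127.0.0.1:9160'])
people = pycassa.ColumnFamily(pool, 'People')

totals = defaultdict(lambda: [0, 0])      # country -> [sum_of_ages, row_count]
for key, cols in people.get_range(columns=['country', 'age']):
    if 'country' in cols and 'age' in cols:
        totals[cols['country']][0] += int(cols['age'])
        totals[cols['country']][1] += 1

averages = dict((c, s / float(n)) for c, (s, n) in totals.items())
print(averages)

Be aware that get_range walks every row, so this only makes sense for batch jobs, not per-request queries.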
Cassandra
3,052,952
10
If I delete every key in a ColumnFamily in a Cassandra db using remove(key), then when I use get_range_slices, the rows are still there, but without columns. How can I remove entire rows?
Why do deleted keys show up during range scans? Because get_range_slice says, "apply this predicate to the range of rows given," meaning, if the predicate result is empty, we have to include an empty result for that row key. It is perfectly valid to perform such a query returning empty column lists for some or all keys, even if no deletions have been performed.
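Depending on the client, keys with empty column lists may come back from range queries; a defensive sketch in Python with pycassa (keyspace and CF names are hypothetical) simply drops them:

import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', server_list=['127.0.0.1:9160'])
cf = pycassa.ColumnFamily(pool, 'MyCF')

# Keys whose predicate result is empty come back with no columns; skip them.
live_rows = dict((key, cols) for key, cols in cf.get_range() if cols)

The ghost rows eventually disappear on their own once the tombstones are compacted away after GCGraceSeconds.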
Cassandra
2,981,483
10
I keep reading articles from different sources saying that big sites are switching from memcached to Cassandra. Coming from a MySQL background, I get a slight headache trying to see the pros and cons of the two when compared to each other. Can you help me learn more about this?
It would be silly to replace memcached as a cache with Cassandra in most situations. What companies like Digg are doing is, replacing the database+memcached pair with Cassandra: Cassandra provides both durable storage, and an integrated, high performance cache (the "row cache"). This prevents problems like memcached being populated from an out of date slave (what Twitter calls "potential consistency") and simplifies cluster management.
Cassandra
2,864,970
10
Is there a stable Cassandra library for Erlang? I can't seem to find one.
I faced the same issue. After benchmarking most of the Cassandra drivers available, I decided to start a new driver, Erlcass, based on the DataStax C++ driver. The DataStax C++ driver has incredible performance and is fully async. In my tests on a cluster where other Erlang drivers couldn't reach more than 10k reads/second, with the DataStax one I was able to get over 60k/s. There is a slight difference between the DataStax driver and Erlcass, but I was still able to reach over 50k reads/s in the same scenarios. Most of the overhead comes from converting the data into Erlang terms and back. Improving the interface and performance is a work in progress.
Cassandra
2,697,101
10