Channel: DataStax Support Forums » Recent Topics

tambalavanar on "Java code to perform CFS file operations from remote system is not working"


I'm writing a Java program to read and write files to CFS from a remote system (a non-DSE machine). As suggested on the DataStax site, I wrote the following piece of code.


import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.security.UserGroupInformation;

import com.datastax.bdp.hadoop.cfs.CassandraFileSystem;

public class CassandraFileHelper {

    public static void main(String[] args) throws Exception {

        FSDataOutputStream o = null;
        CassandraFileSystem cfs = null;
        String content = "some text content..";

        try {
            System.setProperty("cassandra.config", "conf/cassandra.yaml");
            System.setProperty("dse.config", "conf/dse.yaml");

            Configuration conf = new Configuration();
            conf.addResource(new Path("conf/core-site.xml"));

            UserGroupInformation.createUserForTesting("unixuserid", new String[] { "usergroupname" });
            UserGroupInformation.setConfiguration(conf);

            cfs = new CassandraFileSystem();
            cfs.initialize(URI.create("cfs://hostname:9160/"), conf);

            o = cfs.create(new Path("/folder/testfile.txt"), true);
            o.write(content.getBytes());
            o.flush();

        } catch (Exception err) {
            System.out.println("Error: " + err.toString());
        } finally {
            if (o != null)
                o.close();
            if (cfs != null)
                cfs.close();
        }
    }
}


I've included the configuration files and jars listed below from the DSE package.

  • cassandra.yaml
  • core-site.xml
  • dse.yaml
  • cassandra-all-1.0.10.jar
  • cassandra-clientutil-1.0.10.jar
  • cassandra-thrift-1.0.10.jar
  • commons-cli-1.1.jar
  • commons-codec-1.2.jar
  • commons-configuration-1.6.jar
  • commons-lang-2.4.jar
  • commons-logging-1.1.1.jar
  • compress-lzf-0.8.4.jar
  • dse.jar
  • guava-r08.jar
  • hadoop-core-1.0.2-dse-20120707.200359-5.jar
  • libthrift-0.6.1.jar
  • log4j-1.2.16.jar
  • slf4j-api-1.6.1.jar
  • snakeyaml-1.6.jar
  • snappy-java-1.0.4.1.jar
  • speed4j-0.9.jar

When I run the program, I get the following error:

org.apache.thrift.TApplicationException: Internal error processing batch_mutate

I copied all the config files from a DSE machine, and after adding them I get the following error:

Cannot locate conf/cassandra.yaml
Fatal configuration error; unable to start server. See log for stacktrace.

Could anyone please validate my approach and let me know whether this is possible?
Thanks.
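
A minimal sketch of one thing worth checking (an assumption, not a confirmed fix): the "Cannot locate conf/cassandra.yaml" message is produced when the cassandra.config system property cannot be resolved as a URL or classpath resource, so passing the locations as file: URLs may help. The paths below are hypothetical.

import java.io.File;

public class CfsConfigUrls {
    public static void main(String[] args) {
        // Hypothetical locations of the YAML files copied from the DSE node.
        File cassandraYaml = new File("conf/cassandra.yaml");
        File dseYaml = new File("conf/dse.yaml");

        // Pass file: URLs instead of bare relative paths so the config loader
        // does not have to find the files on the classpath.
        System.setProperty("cassandra.config", cassandraYaml.toURI().toString());
        System.setProperty("dse.config", dseYaml.toURI().toString());

        System.out.println("cassandra.config = " + System.getProperty("cassandra.config"));
    }
}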


bryan on "Opscenter v3.2.0 bug with DSE instance"


I've added a performance chart to my dashboard to look at Cassandra JVM memory on one of my Solr instances. When I start up the opscenter-agent, the following appears in the agent.log file and the chart shows "NO DATA".

ERROR [jmx-metrics-3] 2013-07-18 16:26:32,781 Error getting Solr metrics
java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.String
at opsagent.util$ensure_int.invoke(util.clj:89)
at opsagent.jmx_metrics$process_metric_map.invoke(jmx_metrics.clj:57)
at opsagent.jmx_metrics$fetch_metric$fn__3575.invoke(jmx_metrics.clj:94)
at clojure.core$map$fn__4207.invoke(core.clj:2487)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:598)
at clojure.core$next.invoke(core.clj:64)
at clojure.core.protocols$fn__6034.invoke(protocols.clj:146)
at clojure.core.protocols$fn__6005$G__6000__6014.invoke(protocols.clj:19)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31)
at clojure.core.protocols$fn__6026.invoke(protocols.clj:54)
at clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6177)
at clojure.core$into.invoke(core.clj:6229)
at opsagent.jmx_metrics$fetch_metric.doInvoke(jmx_metrics.clj:93)
at clojure.lang.RestFn.invoke(RestFn.java:439)
at opsagent.jmx_metrics$cf_metric_helper$fn__3603$fn__3604.invoke(jmx_metrics.clj:118)
at opsagent.jmx$create_jmx_pool$wrapper__1079.invoke(jmx.clj:188)
at opsagent.jmx_metrics$cf_metric_helper$fn__3603.invoke(jmx_metrics.clj:118)
at clojure.core$map$fn__4207.invoke(core.clj:2487)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.RT.seq(RT.java:484)
at clojure.core$seq.invoke(core.clj:133)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30)
at clojure.core.protocols$fn__6026.invoke(protocols.clj:54)
at clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6177)
at clojure.core$into.invoke(core.clj:6229)
at opsagent.jmx_metrics$cf_metric_helper.invoke(jmx_metrics.clj:118)
at opsagent.jmx_metrics$start_pool$fn__3640.invoke(jmx_metrics.clj:182)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

bryan on "Error running portfolio demo hive/hadoop job example with DSE 3.1.0 on CentOS/Linux"


http://www.datastax.com/documentation/gettingstarted/index.html#getting_started/gettingStartedDemoPortfolio_t.html

dse hive -f /usr/share/dse-demos/portfolio_manager/10_day_loss.q

MapReduce Total cumulative CPU time: 5 seconds 430 msec
Ended Job = job_201307181430_0007 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201307181430_0007_m_000005 (and more) from job job_201307181430_0007
Exception in thread "Thread-14" java.lang.RuntimeException: Error while reading from task log url
at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:240)
at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:227)
at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:92)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://<myhost>:50060/tasklog?taskid=attempt_201307181430_0007_m_000000_2&start=-8193
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1625)
at java.net.URL.openStream(URL.java:1037)
at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:192)
... 3 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:

Any ideas?

bryan on "Cassandra table with composite partition key + wide row to HIVE external table Mapping issue?"


I've recently upgraded our evaluation of DataStax to 3.1.0 and started working with it again, and I'm running into a snag that I was hoping you might be able to run by a technical resource.

If we have a CQL3 table like this:

CREATE TABLE "test" (
field1 text,
field2 text,
part int,
ts timeuuid,
key1 text,
key2 text,
key3 text,
key4 text,
key5 text,
value int,
PRIMARY KEY ((field1, field2, part), ts)
)

Is it possible to create an external HIVE table that maps to this Cassandra table, which has a partition key composed of 3 columns plus a wide row of data collected over a time series?

I've tried several combinations of this HIVE table with no success:

CREATE EXTERNAL TABLE test (
  field1 string, field2 string, part int, ts binary,
  key1 string, key2 string, key3 string, key4 string, key5 string,
  value int)
STORED BY 'org.apache.hadoop.hive.cassandra.cql3.CqlStorageHandler'
WITH SERDEPROPERTIES (
  "cassandra.ks.name" = "test",
  "cassandra.columns.mapping" = ":key,:column,:column,:column,:column,:column,:column,:column,:column,:column");

It creates the table fine, but it fails when I run a simple query like this:

select count(*) from test;

with an error like this:

java.io.IOException: java.io.IOException: java.lang.RuntimeException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:243)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:522)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.io.IOException: java.lang.RuntimeException
at org.apache.hadoop.hive.cassandra.cql3.input.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:89)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:240)
... 9 more
Caused by: java.lang.RuntimeException
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:646)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.<init>(CqlPagingRecordReader.java:284)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:150)
at org.apache.hadoop.hive.cassandra.cql3.input.CqlHiveRecordReader.initialize(CqlHiveRecordReader.java:91)
at org.apache.hadoop.hive.cassandra.cql3.input.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:83)
... 10 more
Caused by: InvalidRequestException(why:line 1:104 no viable alternative at input ':')
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:616)
... 14 more

ken.hancock@schange.com on "Splitting to two datacenters"


I have a single 6-node datacenter right now, with my main keyspace running replication strategy Solr:6 on DSE 3.0.1.

I'd like to split those to better balance my workloads and I'm stuck on the actual procedure -- to start I want to move to Cassandra:3, Solr:3 and then scale each from there.

http://www.datastax.com/docs/datastax_enterprise2.2/deploy/workload_reprovisioning (the DSE 3.0 link is broken)

The page states that Solr nodes can't be reprovisioned, which seems to be accurate: when I started up DSE without the -s flag, Solr was turned on anyway.

My ring currently shows:
n0 Solr rack1 Up Normal 47.8 GB 16.67% 0
n1 Solr rack1 Up Normal 75.47 GB 16.67% 28356863910078203714492389662765613056
n2 Solr rack1 Up Normal 68.9 GB 16.67% 56713727820156407428984779325531226112
n3 Solr rack1 Up Normal 68.36 GB 16.67% 85070591730234615865843651857942052864
n4 Solr rack1 Up Normal 3.85 GB 16.67% 113427455640312814857969558651062452224
n5 Solr rack1 Up Normal 4.21 GB 16.67% 141784319550391032739561396922763706368

I thought I could essentially:
1. Update the replication strategy
2. Turn off Solr on 3 nodes
3. Generate new tokens for the two datacenters
4. Use nodetool move to move each node around the ring
5. Repair each node
6. Clean each node

Since #2 didn't work, what's the actual sequence?

P.S. any way to alter my profile nickname to not use my email address so it doesn't flow into the text of my posting?

aeham on "Configuring Solr for an existing column family"


Hi,

All the DSE search examples I've seen, and followed, demonstrate the creation of a Solr schema which DSE subsequently takes and maps into a new Cassandra column family.
What I'm wondering is whether the opposite is possible, i.e., can one take an existing pure-Cassandra column family and add a Solr core for it?

Regards,
Aeham

darshanmeel on "Designing a solution"


Hi
I have a design question. Suppose you have Order and OrderDetails tables as in an RDBMS, but OrderDetails has a large text or varchar column, OrderDetailDesc, which is something like an XML column containing details such as ProductName, ProductDescription, OrderAmount, rate, etc. This column is quite large, say 10-15 KB on average, with some values up to 200 KB. There are millions of rows in OrderDetails, which pushes the table into hundreds of GB and makes the RDBMS database unnecessarily large. So I have decided to move the OrderDetailDesc column to Cassandra. I will be using version 1.2. I have come up with two options:

1. Create a table with OrderId, OrderDetailId and OrderDetailDesc columns, with a composite primary key on (OrderId, OrderDetailId). Partitioning then happens on the OrderId column, and rows within a partition are ordered by OrderDetailId.

2. Create a table with OrderId and a map column in which OrderDetailId maps to OrderDetailDesc.

To give you more information: there will be around 3 million inserts/updates into this table, with inserts far outnumbering updates.
There will be around 500, or perhaps 1000, selects by OrderId that bring back all the order details.
There will be around 100 selects by OrderId and OrderDetailId.

I would like to know which approach would be better; I am leaning towards approach 1. However, I want to understand map columns: will the whole map be read even when a single OrderDetailId is retrieved or updated?

In approach 1, OrderId is repeated for each OrderDetailId. Will approach 1 use more space than approach 2?

Please let me know if I have not made myself clear and you need any other information.

Also, please note that this scenario is one I have fabricated; in my real case the OrderDetailDesc column cannot be normalized further, as some of you might otherwise point out.
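
For concreteness, a sketch of the two candidate schemas and the reads described above, written against the DataStax Java driver (contact point, keyspace, and column types are assumptions for illustration; note that CQL returns a collection column as a whole, which matters for option 2). close() is driver 2.x style; on 1.x it would be shutdown().

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class OrderDetailSchemas {
    public static void main(String[] args) {
        // Placeholder contact point and keyspace.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE shop WITH replication ="
                + " {'class': 'SimpleStrategy', 'replication_factor': 1}");

        // Option 1: one row per (OrderId, OrderDetailId); the partition key is
        // order_id and rows inside a partition are clustered by order_detail_id.
        session.execute("CREATE TABLE shop.order_details_v1 ("
                + " order_id bigint, order_detail_id bigint, order_detail_desc text,"
                + " PRIMARY KEY (order_id, order_detail_id))");

        // Option 2: one row per OrderId with a map keyed by OrderDetailId.
        // The map comes back as a whole, so a single-detail read still returns
        // every entry of the map for that order.
        session.execute("CREATE TABLE shop.order_details_v2 ("
                + " order_id bigint PRIMARY KEY,"
                + " details map<bigint, text>)");

        // The two read patterns from the post, against option 1.
        session.execute("SELECT * FROM shop.order_details_v1 WHERE order_id = 42");
        session.execute("SELECT order_detail_desc FROM shop.order_details_v1"
                + " WHERE order_id = 42 AND order_detail_id = 7");

        cluster.close();
    }
}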

jpayne97 on "Error saving column family changes in opscenter"


Using OpsCenter 3.2.0 and Cassandra 3.1.0:
1. Go into OpsCenter
2. Click Schema
3. Click on any keyspace
4. Click on any column family
5. Properties -> Edit
6. Click Save; you don't even need to change anything. I always see this:
Error
Error saving column family: required argument is not a float

I have tried adding .0 to every number, but I have no clue what is causing this.
I'm in the latest Chrome.


vs on "'DateType' column name shows incorrect TimeZone info in OpsCenter"


Hi,

My column family uses 'DateType' as comparator and a sample date from CLI is as follows:
RowKey: 0bc32ce7-0059-3cd4-9ba8-8ace2cb9ac3e:2013-07-22
=> (column=2013-07-22 22:58:45+0530, value=67.07788719634536, timestamp=1374514121291)

However, the OpsCenter result for the same RowKey is:
2013-07-22 22:58:45 UTC 67.0778871963

The timezone shown in the CLI is +0530, but OpsCenter shows the correct timestamp value labeled 'UTC', which is confusing. Is this a problem with OpsCenter, or am I missing something?

Thanks,
VS
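
For what it's worth, DateType stores a timezone-agnostic millisecond timestamp; the timezone you see is applied by whichever tool formats it. A small illustration (the stored value below is an assumed example chosen to match the column name in the post):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateTypeRendering {
    public static void main(String[] args) {
        // Assumed stored value: 2013-07-22 22:58:45 +0530 == 2013-07-22 17:28:45 UTC.
        long storedMillis = 1374514125000L;

        SimpleDateFormat ist = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ");
        ist.setTimeZone(TimeZone.getTimeZone("Asia/Kolkata"));

        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss 'UTC'");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));

        // Same instant, two renderings; neither changes what is stored.
        System.out.println(ist.format(new Date(storedMillis))); // 2013-07-22 22:58:45+0530
        System.out.println(utc.format(new Date(storedMillis))); // 2013-07-22 17:28:45 UTC
    }
}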

jpayne97 on "After upgrading to 3.1 we now get this error about shards in solr"


java.io.IOException: Unavailable shards for ranges: [(0,56713727820156410577229101238628035242]]

We have 3 nodes in 1 cluster for this.

The same configs worked in 3.0.2

aldep on "Configuring Cassandra with private IP for internode communications"


I am trying to create a Cassandra cluster. For inter-node communication, each node uses a separate interface with an internal IP address that is not accessible from outside. In addition, each machine has an interface with an external IP that is visible from outside.

The cluster works fine when a client can use the internal addresses. But when I try to connect to a node using an external address, the connection itself works, yet the cluster is described to the client using the internal addresses. As a result, the client fails because it cannot connect to the Cassandra nodes at the reported internal addresses.

Is there a way to make the Cassandra cluster report the DNS names (or external IPs) of the nodes instead of their internal IP addresses?

darshanmeel on "Update VS Insert"


I would like to know whether I can use just an UPDATE statement from the client to insert or update data in a table, instead of doing an INSERT and then an UPDATE if the key exists. What are the performance implications of using UPDATE versus INSERT followed by UPDATE?
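
In CQL, INSERT and UPDATE are both plain writes with no read beforehand: an UPDATE on a key that does not exist simply creates the row. A minimal sketch (DataStax Java driver; keyspace, table, and values are assumptions for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class UpsertSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        session.execute("CREATE KEYSPACE demo WITH replication ="
                + " {'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE demo.users (id text PRIMARY KEY, name text)");

        // No prior INSERT for id 'u1': this UPDATE creates the row (an upsert).
        session.execute("UPDATE demo.users SET name = 'first write' WHERE id = 'u1'");

        // Running the same statement again simply overwrites the column.
        session.execute("UPDATE demo.users SET name = 'second write' WHERE id = 'u1'");

        cluster.close();
    }
}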

376905351_qq.com on "ERROR cfs.CassandraFileSystemRules: Loading path rules failed for: cfs UnavailableException()"


When I create table1 (STORED AS TEXTFILE) in Hive and then select from table1 into table2 (STORED AS SEQUENCEFILE), the job fails at about 20% complete.
After that, "dse hadoop fs -ls /" also has a problem, shown below.
Why, and how can I recover?
3/07/22 16:23:01 ERROR cfs.CassandraFileSystemRules: Loading path rules failed for: cfs
UnavailableException()
at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12924)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.datastax.bdp.util.CassandraProxyClient.invokeDseClient(CassandraProxyClient.java:640)
at com.datastax.bdp.util.CassandraProxyClient.invoke(CassandraProxyClient.java:616)
at $Proxy7.get_range_slices(Unknown Source)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemRules.readRulesTable(CassandraFileSystemRules.java:131)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemRules.loadRules(CassandraFileSystemRules.java:104)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemRules.<init>(CassandraFileSystemRules.java:66)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.initialize(CassandraFileSystemThriftStore.java:284)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.initialize(CassandraFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.fs.FsShell.init(FsShell.java:82)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:1745)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
ls: could not get get listing for 'cfs://xxx.xx.xx.xx/' : java.lang.RuntimeException: UnavailableExceptio

ken.hancock@schange.com on "Secondary Index on converted Solr node?"


We recently split our 6-node Solr datacenter into a no-Solr Cassandra datacenter and a Solr datacenter (DSE delegate snitch). I'm noticing high load on one of the nodes in the Cassandra DC, and according to OpsCenter it's rebuilding the secondary index.

The only secondary indexes should be the Solr indexes, which I wouldn't think it should be rebuilding. Should it?

Does something need to be done to scrub the secondary indexes or is this expected behavior?

aeham on "Solr and Cassandra's composite keys"


demouser on "How to filter the record by using "order by DESC" and "limit 10""


How can I filter records using "ORDER BY ... DESC" and "LIMIT 10"?
Will this work in Cassandra with JPA?
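
On the CQL side this works when the ordering column is a clustering column: ORDER BY applies within a single partition and LIMIT caps the row count. A sketch with the DataStax Java driver follows (schema and names are assumptions; whether the same query can be issued through a JPA provider depends on which provider is used and is not shown here):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class LatestTenSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        session.execute("CREATE KEYSPACE demo WITH replication ="
                + " {'class': 'SimpleStrategy', 'replication_factor': 1}");

        // ts is a clustering column, so rows within one user_id partition are
        // ordered by ts; CLUSTERING ORDER BY (ts DESC) stores newest-first.
        session.execute("CREATE TABLE demo.events ("
                + " user_id text, ts timeuuid, payload text,"
                + " PRIMARY KEY (user_id, ts))"
                + " WITH CLUSTERING ORDER BY (ts DESC)");

        // Latest 10 events for one partition.
        ResultSet recent = session.execute(
                "SELECT ts, payload FROM demo.events"
                + " WHERE user_id = 'u1' ORDER BY ts DESC LIMIT 10");
        for (Row row : recent) {
            System.out.println(row.getUUID("ts") + " " + row.getString("payload"));
        }

        cluster.close();
    }
}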

ashutosh on "HOW TO INSERT Double Column value"


Dear All,

I am trying to insert records into my Cassandra table (named address) using the Hector library, and I am getting the following runtime exception. Please give me some guidance.

Please refer ...

My Cassandra Table :
-----------------------------
CREATE TABLE address (
KEY text PRIMARY KEY,
lon double,
address text,
lat double
) WITH
comment='' AND
comparator=text AND
read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
default_validation=blob AND
min_compaction_threshold=4 AND
max_compaction_threshold=32 AND
replicate_on_write='true' AND
compaction_strategy_class='SizeTieredCompactionStrategy' AND
compression_parameters:sstable_compression='SnappyCompressor';

CREATE INDEX lon_idx ON address (lon);
CREATE INDEX addess_idx ON address (address);
CREATE INDEX lat_idx ON address (lat);

My Java ( Hector ) Code :
--------------------------------
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
mutator.addInsertion((lat + "_" + lon), "address", HFactory.createColumn("lat", lat, StringSerializer.get(), DoubleSerializer.get()))
.addInsertion((lat + "_" + lon), "address", HFactory.createColumn("lon", lon, StringSerializer.get(), DoubleSerializer.get()))
.addInsertion((lat + "_" + lon), "address", HFactory.createStringColumn("address", addr));

mutator.execute();

Exception :
--------------

me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:Expected 4 or 0 byte int (5))
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:264)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)

amey on "DSC 1.2.6 node restart issue"


Hello Everyone,

I am facing issues with restarting a node, or the entire cluster, with DSC 1.2.6. Nothing was changed before restarting the node, but it seems unable to join the cluster once restarted.
Here is a snapshot of the logs:

INFO 20:31:54,488 Writing Memtable-schema_keyspaces@2087463062(251/251 serialized/live bytes, 8 ops)
INFO 20:31:54,499 Completed flushing /mnt/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ic-6-Data.db (220 bytes) for commitlog position ReplayPosition(segmentId=1374611514175, position=142)
INFO 20:31:54,500 Writing Memtable-schema_columns@551341995(24717/24717 serialized/live bytes, 398 ops)
INFO 20:31:54,560 Completed flushing /mnt/cassandra/data/system/schema_columns/system-schema_columns-ic-3-Data.db (4305 bytes) for commitlog position ReplayPosition(segmentId=1374611514175, position=142)
INFO 20:31:54,561 Writing Memtable-schema_columnfamilies@1144051773(22187/22187 serialized/live bytes, 369 ops)
INFO 20:31:54,577 Completed flushing /mnt/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ic-9-Data.db (4594 bytes) for commitlog position ReplayPosition(segmentId=1374611514175, position=142)
INFO 20:31:54,578 Log replay complete, 13 replayed mutations
INFO 20:31:54,818 Cassandra version: 1.2.6
INFO 20:31:54,819 Thrift API version: 19.36.0
INFO 20:31:54,819 CQL supported versions: 2.0.0,3.0.4 (default: 3.0.4)
INFO 20:31:54,852 Loading persisted ring state
INFO 20:31:54,871 Starting up server gossip
INFO 20:31:54,878 Enqueuing flush of Memtable-local@32685607(251/251 serialized/live bytes, 9 ops)
INFO 20:31:54,879 Writing Memtable-local@32685607(251/251 serialized/live bytes, 9 ops)
INFO 20:31:54,888 Completed flushing /mnt/cassandra/data/system/local/system-local-ic-12-Data.db (237 bytes) for commitlog position ReplayPosition(segmentId=1374611514175, position=53949)
INFO 20:31:54,966 Starting Messaging Service on port 7000
java.lang.NullPointerException
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:728)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:554)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:451)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:342)
at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:375)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
Cannot load daemon
Service exit with a return value of 3

Let me know if anyone else has faced this issue.

Amey

asirghi on "CassandraFS - Required field 'storage_type' was not present!"


Hi!
When I try to read an existing text file from the Cassandra file system, I get this exception:

java.io.IOException: org.apache.thrift.TApplicationException: Required field 'storage_type' was not present! Struct: get_cfs_sblock_args(caller_host_name:localhost, block_id:32 31 32 61 65 31 64 30 66 34 34 37 31 31 65 32 30 30 30 30 32 34 32 64 35 30 63 66 31 66 62 37, sblock_id:32 31 32 62 32 66 66 30 66 34 34 37 31 31 65 32 30 30 30 30 32 34 32 64 35 30 63 66 31 66 62 37, offset:0, storage_type:null, keyspace:cfs)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveSubBlock(CassandraFileSystemThriftStore.java:480)
at com.datastax.bdp.hadoop.cfs.CassandraSubBlockInputStream.subBlockSeekTo(CassandraSubBlockInputStream.java:145)
at com.datastax.bdp.hadoop.cfs.CassandraSubBlockInputStream.read(CassandraSubBlockInputStream.java:95)
at com.datastax.bdp.hadoop.cfs.CassandraInputStream.read(CassandraInputStream.java:149)
at java.io.DataInputStream.read(DataInputStream.java:132)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.BufferedReader.fill(BufferedReader.java:136)
at java.io.BufferedReader.readLine(BufferedReader.java:299)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at com.isightpartners.tools.hdfs.CassFSTest.readFileContent(CassFSTest.java:77)
at com.isightpartners.tools.hdfs.CassFSTest.testWriteReadString(CassFSTest.java:52)

The file exists, and I can do "dse hadoop fs -copyToLocal ..." on it.

To read it I use something like:

FSDataInputStream inputStream = fs.open(new Path(filePath));
BufferedReader br = new BufferedReader(new InputStreamReader(inputStream));

...
content+=br.readLine();
...

DSE 3.0.1 with included libs + hadoop-core-1.2.0
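
For reference, a self-contained version of that read path (the configuration location and file path are assumptions; the loop simply concatenates lines, as in the snippet above):

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CfsReadSketch {
    public static void main(String[] args) throws Exception {
        // Assumed setup: core-site.xml defines the cfs:// default file system
        // and the DSE jars are on the classpath, as in the original test.
        Configuration conf = new Configuration();
        conf.addResource(new Path("conf/core-site.xml"));
        FileSystem fs = FileSystem.get(conf);

        FSDataInputStream inputStream = fs.open(new Path("/folder/testfile.txt"));
        BufferedReader br = new BufferedReader(new InputStreamReader(inputStream));
        try {
            StringBuilder content = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                content.append(line);
            }
            System.out.println(content);
        } finally {
            br.close();
        }
    }
}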

mschenk74 on "Problems after upgrading 5 node cluster from version 3.0.1 to 3.1.0"


Q1: Last week we started to upgrade our cluster (5 real-time nodes) from 3.0.1 to 3.1.0. After all nodes were upgraded, we were no longer able to see our keyspaces.
Did we miss some configuration settings or are there known issues in the 3.1.0 release?

Q2: Today I saw that there is a new version (3.1.1) available in the repositories. Is there any documentation on the differences between 3.1.0 and 3.1.1? Should we switch to 3.1.1?

The cluster is used for development and evaluation purposes only at the moment.
