DataStax Support Forums » Recent Topics

palgy on "Unable to install DataStax Enterprise on Red Hat Enterprise Linux 5.8"

Hi,

I have an issue installing dse-full on Red Hat Enterprise Linux 5.8; can anyone help?

See below:

[root@localhost Server]# java -version
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
[root@localhost Server]# yum install jna
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Package jna-3.4.0-4.el5.x86_64 already installed and latest version
Nothing to do
[root@localhost Server]# yum install dse-full
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package dse-full.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: dse-libhive = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libcassandra = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libmahout = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libpig = 3.0.2 for package: dse-full
--> Processing Dependency: dse-demos = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libtomcat = 3.0.2 for package: dse-full
--> Processing Dependency: dse-liblog4j = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libsqoop = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libsolr = 3.0.2 for package: dse-full
--> Processing Dependency: dse-libhadoop = 3.0.2 for package: dse-full
--> Running transaction check
---> Package dse-demos.noarch 0:3.0.2-1 set to be updated
---> Package dse-libcassandra.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: python-cql >= 1.4.0 for package: dse-libcassandra
--> Processing Dependency: java >= 1.6.0 for package: dse-libcassandra
--> Processing Dependency: python(abi) >= 2.6 for package: dse-libcassandra
---> Package dse-libhadoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: dse-libhadoop-native = 3.0.2 for package: dse-libhadoop
--> Processing Dependency: java >= 1.6.0 for package: dse-libhadoop
---> Package dse-libhive.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libhive
---> Package dse-liblog4j.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-liblog4j
---> Package dse-libmahout.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libmahout
---> Package dse-libpig.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libpig
---> Package dse-libsolr.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsolr
---> Package dse-libsqoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsqoop
---> Package dse-libtomcat.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libtomcat
--> Running transaction check
---> Package dse-libcassandra.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libcassandra
---> Package dse-libhadoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libhadoop
---> Package dse-libhadoop-native.x86_64 0:3.0.2-1 set to be updated
---> Package dse-libhive.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libhive
---> Package dse-liblog4j.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-liblog4j
---> Package dse-libmahout.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libmahout
---> Package dse-libpig.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libpig
---> Package dse-libsolr.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsolr
---> Package dse-libsqoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsqoop
---> Package dse-libtomcat.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libtomcat
---> Package python26.x86_64 0:2.6.8-2.el5 set to be updated
--> Processing Dependency: libpython2.6.so.1.0()(64bit) for package: python26
---> Package python26-cql.noarch 0:1.4.0-2 set to be updated
--> Processing Dependency: python26-thrift for package: python26-cql
--> Running transaction check
---> Package dse-libcassandra.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libcassandra
---> Package dse-libhadoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libhadoop
---> Package dse-libhive.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libhive
---> Package dse-liblog4j.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-liblog4j
---> Package dse-libmahout.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libmahout
---> Package dse-libpig.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libpig
---> Package dse-libsolr.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsolr
---> Package dse-libsqoop.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libsqoop
---> Package dse-libtomcat.noarch 0:3.0.2-1 set to be updated
--> Processing Dependency: java >= 1.6.0 for package: dse-libtomcat
---> Package python26-libs.x86_64 0:2.6.8-2.el5 set to be updated
---> Package python26-thrift.x86_64 0:0.7.0-2 set to be updated
--> Finished Dependency Resolution
dse-libtomcat-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libtomcat-3.0.2-1.noarch (datastax)
dse-libpig-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libpig-3.0.2-1.noarch (datastax)
dse-libsqoop-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libsqoop-3.0.2-1.noarch (datastax)
dse-libsolr-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libsolr-3.0.2-1.noarch (datastax)
dse-libhadoop-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libhadoop-3.0.2-1.noarch (datastax)
dse-libmahout-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libmahout-3.0.2-1.noarch (datastax)
dse-libcassandra-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libcassandra-3.0.2-1.noarch (datastax)
dse-libhive-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-libhive-3.0.2-1.noarch (datastax)
dse-liblog4j-3.0.2-1.noarch from datastax has depsolving problems
--> Missing Dependency: java >= 1.6.0 is needed by package dse-liblog4j-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libsqoop-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libmahout-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libhadoop-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-liblog4j-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libhive-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libsolr-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libcassandra-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libtomcat-3.0.2-1.noarch (datastax)
Error: Missing Dependency: java >= 1.6.0 is needed by package dse-libpig-3.0.2-1.noarch (datastax)
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest


sthibault on "Hadoop with Kerberos"

MapReduce jobs are not working after Kerberos integration. The dse hadoop jar command just hangs after outputting:
13/07/02 14:11:22 INFO security.TokenCache: Got dt for cfs://t1-node4.mhintern.com/tmp/hadoop-cassandra/mapred/staging/sthibault/.staging/job_201307021350_0005;uri=10.0.2.79:0;t.service=
13/07/02 14:11:22 INFO security.TokenCache: Got dt for /tmp/reduced;uri=10.0.2.77:0;t.service=

There is no information in /var/log/cassandra/system.log and dse hadoop job -list doesn't list any jobs.

Any suggestions?

tanzir on "Failed to start opscenter"

Hello everyone,
Although I have been using OpsCenter for a long time, this is the first time I am installing it myself, and I am running into issues starting it up.

OS Version: CentOS release 6.3 (Final)
Amazon EC2 Instance

I have already installed a Cassandra cluster with 3 nodes and they are working fine (I checked with nodetool). I then wanted to use another node just for OpsCenter, so I installed OpsCenter from the rpm: opscenter-free-3.0.1-1.noarch.rpm.

Here is the configuration I used:

# opscenterd.conf

[webserver]
port = 8888
interface = 111.111.111.111 (public ip address)
# The following settings can be used to enable ssl support for the opscenter
# web application. Change these values to point to the ssl certificate and key
# that you wish to use for your OpsCenter install, as well as the port you would like
# to serve ssl traffic from.
#ssl_keyfile = /var/lib/opscenter/ssl/opscenter.key
#ssl_certfile = /var/lib/opscenter/ssl/opscenter.pem
#ssl_port = 8443
[agents]
use_ssl = false

[logging]
# level may be TRACE, DEBUG, INFO, WARN, or ERROR
level = INFO

[authentication]
# if this file does not exist, there will be no password protection. Use the
# set_passwd.py tool (included with OpsCenter) to set passwords. This property will
# default to /etc/opscenter/.passwd in packaged installations and ./passwds in
# tarball installations.
#passwd_file =

---------------------------------------

When I started the service with this configuration, it failed. Here is the log:

File "/usr/share/opscenter/lib/py-redhat/2.6/shared/amd64/twisted/application/internet.py", line 110, in startService
self._port = self._getPort()
File "/usr/share/opscenter/lib/py-redhat/2.6/shared/amd64/twisted/application/internet.py", line 131, in _getPort
'listen%s' % (self.method,))(*self.args, **self.kwargs)
File "/usr/share/opscenter/lib/py-redhat/2.6/shared/amd64/twisted/internet/posixbase.py", line 419, in listenTCP
p.startListening()
File "/usr/share/opscenter/lib/py-redhat/2.6/shared/amd64/twisted/internet/tcp.py", line 857, in startListening
raise CannotListenError, (self.interface, self.port, le)
twisted.internet.error.CannotListenError: Couldn't listen on 111.111.111.111:8888: [Errno 99] Cannot assign requested address.

----------------------------------

But when I used the private IP address (interface = 10.0.0.80), that error no longer appeared in the log; however, I still cannot reach the OpsCenter UI at http://<hostname or public IP>:8888.

From the log:

us-east-1b-hadoop-client [root:opscenter]$ tail opscenterd.log
2013-03-07 15:29:57-0500 [] INFO: opscenterd.WebServer.OpsCenterdWebServer starting on 8888
2013-03-07 15:29:57-0500 [] INFO: Starting factory <opscenterd.WebServer.OpsCenterdWebServer instance at 0x32b87a0>
2013-03-07 15:29:57-0500 [] INFO: morbid.morbid.StompFactory starting on 61619
2013-03-07 15:29:57-0500 [] INFO: Starting factory <morbid.morbid.StompFactory instance at 0x32dab00>
2013-03-07 15:29:57-0500 [] INFO: Configuring agent communication with ssl support disabled.
2013-03-07 15:29:57-0500 [] INFO: morbid.morbid.StompFactory starting on 61620
2013-03-07 15:29:57-0500 [] INFO: OS Version: Linux version 2.6.32-279.14.1.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Tue Nov 6 23:43:09 UTC 2012
2013-03-07 15:29:57-0500 [] INFO: CPU Info: ['1795.672', '1795.672']
2013-03-07 15:29:57-0500 [] INFO: Mem Info: 7337MB
2013-03-07 15:29:58-0500 [] INFO: Package Manager: Unknown

------------------------------------

Any information will be highly appreciated.

Thanks in advance.
Tanzir

acchen on "Cassandra Node dying, saw OpsCenter thrift operation queue full prior"

We just moved Cassandra 1.1.7 into production today, but shortly before that we saw two Cassandra nodes go down with OOM. We saw this error in the past in load tests and have tuned the nofile limits accordingly, so it should not occur. Also note that this error happened when there was NO load on the infrastructure.

ERROR [Thread-22] 2013-07-08 16:31:50,905 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[Thread-22,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:703)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:652)
at org.apache.cassandra.net.MessagingService.receive(MessagingService.java:581)
at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:155)
at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:113)

We could NOT start the Cassandra server back up (it kept giving the OOM error). Only after we shut down the OpsCenter (Enterprise 2.1.3) agent were we able to start Cassandra back up, and then start the agent back up. Below is the agent.log from around the time of the Cassandra node dying. We are seeing a lot of "thrift operation queue is full" warnings and operations being dropped. We are also NOT using secondary indexes. Any thoughts are welcome, thanks!

In agent.log:
WARN [pool-4-thread-1] 2013-07-08 16:31:41,395 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,396 367168 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,396 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,396 367169 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,396 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,396 367170 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,397 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,397 367171 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,397 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,397 367172 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 367173 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 367174 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,398 367175 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,399 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,399 367176 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,399 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,399 367177 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,399 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,400 367178 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,400 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,400 367179 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,400 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,400 367180 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,401 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,401 367181 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,401 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,401 367182 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,402 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,402 367183 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,402 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,402 367184 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,402 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,403 367185 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,403 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,403 367186 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,403 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,403 367187 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,404 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,404 367188 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,404 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,404 367189 operations dropped so far.
WARN [pool-4-thread-1] 2013-07-08 16:31:41,404 Thrift operation queue is full, discarding thrift operation
WARN [pool-4-thread-1] 2013-07-08 16:31:41,405 367190 operations dropped so far.
ERROR [Thread-4] 2013-07-08 16:31:45,347 Error when proccessing thrift call me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
ERROR [pool-5-thread-1] 2013-07-08 16:31:47,793 Error connecting via JMX: java.io.IOException: Cannot run program "cat": java.io.IOException: error=11, Resource temporarily unavailable
ERROR [Thread-4] 2013-07-08 16:31:50,348 Error when proccessing thrift call me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
INFO [pool-5-thread-1] 2013-07-08 16:31:52,794 New JMX connection (127.0.0.1:7199)
ERROR [pool-5-thread-1] 2013-07-08 16:31:52,857 Error connecting via JMX: java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused]
WARN [pool-3-thread-4] 2013-07-08 16:31:53,127 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,128 367191 operations dropped so far.
WARN [pool-3-thread-4] 2013-07-08 16:31:53,128 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,128 367192 operations dropped so far.
WARN [pool-3-thread-4] 2013-07-08 16:31:53,128 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,129 367193 operations dropped so far.
WARN [pool-3-thread-4] 2013-07-08 16:31:53,129 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,129 367194 operations dropped so far.
WARN [pool-3-thread-4] 2013-07-08 16:31:53,129 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,129 367195 operations dropped so far.
WARN [pool-3-thread-4] 2013-07-08 16:31:53,130 Thrift operation queue is full, discarding thrift operation
WARN [pool-3-thread-4] 2013-07-08 16:31:53,130 367196 operations dropped so far.
ERROR [Thread-4] 2013-07-08 16:31:55,350 Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient<16.211.56.72:9160-3>
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:311)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:260)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at clj_hector.core$put.doInvoke(core.clj:164)
at clojure.lang.RestFn.invoke(RestFn.java:470)
at opsagent.cassandra$store_rollup.invoke(cassandra.clj:107)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:540)
at opsagent.cassandra$async_call$fn__582$fn__583.invoke(cassandra.clj:164)
at opsagent.cassandra$process_queue$fn__587.invoke(cassandra.clj:170)
at opsagent.cassandra$process_queue.invoke(cassandra.clj:169)
at opsagent.cassandra$setup_cassandra$fn__595.invoke(cassandra.clj:203)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 19 more
ERROR [Thread-4] 2013-07-08 16:31:55,351 MARK HOST AS DOWN TRIGGERED for host 16.211.56.72(16.211.56.72):9160
ERROR [Thread-4] 2013-07-08 16:31:55,351 Pool state on shutdown: <ConcurrentCassandraClientPoolByHost>:{16.211.56.72(16.211.56.72):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 0; NumBeforeExhausted: 0
INFO [Thread-4] 2013-07-08 16:31:55,351 Shutdown triggered on <ConcurrentCassandraClientPoolByHost>:{16.211.56.72(16.211.56.72):9160}
INFO [Thread-4] 2013-07-08 16:31:55,351 Shutdown complete on <ConcurrentCassandraClientPoolByHost>:{16.211.56.72(16.211.56.72):9160}
INFO [Thread-4] 2013-07-08 16:31:55,352 Host detected as down was added to retry queue: 16.211.56.72(16.211.56.72):9160
WARN [Thread-4] 2013-07-08 16:31:55,392 Could not fullfill request on this host CassandraClient<16.211.56.72:9160-3>

Regards,
Alvin

ashutosh on "HOW TO INSERT Double Column value"

Dear All,

I am trying to insert records into my Cassandra table (name = address) using the Hector library, and I am getting the runtime exception shown below. Please give me some guidance.

My Cassandra Table :
-----------------------------
CREATE TABLE address (
KEY text PRIMARY KEY,
lon double,
address text,
lat double
) WITH
comment='' AND
comparator=text AND
read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
default_validation=blob AND
min_compaction_threshold=4 AND
max_compaction_threshold=32 AND
replicate_on_write='true' AND
compaction_strategy_class='SizeTieredCompactionStrategy' AND
compression_parameters:sstable_compression='SnappyCompressor';

CREATE INDEX lon_idx ON address (lon);
CREATE INDEX addess_idx ON address (address);
CREATE INDEX lat_idx ON address (lat);

My Java ( Hector ) Code :
--------------------------------
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
mutator.addInsertion((lat + "_" + lon), "address", HFactory.createColumn("lat", lat, StringSerializer.get(), DoubleSerializer.get()))
.addInsertion((lat + "_" + lon), "address", HFactory.createColumn("lon", lon, StringSerializer.get(), DoubleSerializer.get()))
.addInsertion((lat + "_" + lon), "address", HFactory.createStringColumn("address", addr));

mutator.execute();

Exception :
--------------

me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:Expected 4 or 0 byte int (5))
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:264)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
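
For comparison, here is a minimal sketch of the same insert done through CQL prepared statements with the DataStax Java driver rather than raw Hector columns. The contact point, the keyspace name, and the quoting of the KEY column are assumptions for illustration, not details taken from the post above.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class AddressInsertSketch {
    public static void main(String[] args) {
        // Contact point and keyspace name ("geo") are placeholders.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("geo");

        double lat = 18.52, lon = 73.85;   // sample coordinates
        String addr = "Sample Street 1";

        // Assumes the row key column is named "KEY" as in the schema above.
        PreparedStatement ps = session.prepare(
            "INSERT INTO address (\"KEY\", lat, lon, address) VALUES (?, ?, ?, ?)");
        session.execute(ps.bind(lat + "_" + lon, lat, lon, addr));

        session.shutdown();
        cluster.shutdown();
    }
}

With a prepared statement the driver serializes lat and lon as doubles itself, so there is no serializer to mismatch; with Hector, the serializer chosen for each column value has to match that column's validator.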

sarobenalt on "Java Driver - CQL 3 - Bound Statements - IN clause"

Hi all!

As the topic indicates, I am trying to figure out how to bind variables for an IN clause in a bound statement when using the Java Driver. In my case, I am building a query that returns a result set containing records that match one of several (but not a fixed number) key values represented by UUIDs. Because the number of key values varies, I was attempting to pass a List<UUID> or an Array of UUIDs to a single bind variable, but received an error indicating that a data type of UUID was expected.

Is there a way to pass a variable-sized list to an IN clause, or should I not be using a bound statement for this query?
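
For illustration, here is a minimal sketch of one way this is often worked around with the 1.0 Java driver: build the IN clause with one bind marker per key and bind the UUIDs positionally. The table and column names are invented for the example.

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import java.util.List;
import java.util.UUID;

public class InClauseSketch {
    // Hypothetical table: CREATE TABLE events (id uuid PRIMARY KEY, payload text);
    public static ResultSet fetchByIds(Session session, List<UUID> ids) {
        // Build "IN (?, ?, ..., ?)" with exactly one marker per key value.
        StringBuilder markers = new StringBuilder();
        for (int i = 0; i < ids.size(); i++) {
            if (i > 0) markers.append(", ");
            markers.append("?");
        }
        String cql = "SELECT id, payload FROM events WHERE id IN (" + markers + ")";
        PreparedStatement ps = session.prepare(cql);
        BoundStatement bound = ps.bind(ids.toArray());   // each UUID fills one marker
        return session.execute(bound);
    }
}

The drawback is that every distinct list size produces its own prepared statement, so this is only a sketch of one possible approach rather than a recommended pattern.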

jpayne97 on "After upgrading to 3.1 we now get this error about shards in solr"

java.io.IOException: Unavailable shards for ranges: [(0,56713727820156410577229101238628035242]]

We have 3 nodes in one cluster for this.

The same configuration worked in 3.0.2.

sthibault on "DSE 3.1 number of hadoop tasks"

I've set up a fresh DSE 3.1 install on two nodes. Per the recommendations, the Cassandra node has num_tokens=256 and the Analytics node has num_tokens=1.

When I run a Hadoop job or a Hive query, 258 map tasks are started, which overwhelms the system and takes a very long time to finish. Why are so many map tasks created, and how can I control it?

Thanks,
--Scott


amey on "DSE Search (Solr) time series data"

Hello Everyone,

I am trying to query time series data with DSE 3.1 Search and so far haven't come up with a good data model. The DSE docs say that "Cassandra time series type rows" are not supported in DSE Search. Does that mean wide rows aren't supported? I assume time series rows are typically wide rows with a variable number of columns per row, for example (row key : one day's data as columns), which would mean one such row becomes a Solr document and the variable columns become dynamic fields in the Solr doc. Is there a way to select only certain fields based on the values in those fields?
Also, are there good design recommendations for querying time series data in DSE Search/Solr?

Amey

ctoomey on "Why different data centers needed to use Solr?"

Per http://www.datastax.com/docs/datastax_enterprise3.0/solutions/dse_search_cluster one cannot run a DSE cluster containing both regular Cassandra nodes and Solr search nodes in the same data center. Why is this?

We are looking at using Solr to provide full-text search capability for data that we're storing in Cassandra. We have multiple physical data centers, but we obviously want to have both Solr nodes and Cassandra nodes in each data center for redundancy and scalability.

thx,
Chris

gdf on "Co-locate Cassandra and Analytics (Hadoop) Servers?"

Is there an 'approved' way to launch multiple Cassandra instances on a single node and have them point to different config files?

I'm specifically looking to set up a cluster with both a Cassandra and an Analytics (i.e., Hadoop) instance co-located on each node.

I'm sure there's any number of hacky ways I can kludge things to make it work, but is there an accepted way to set up such a configuration?

Andrei Pozolotin on "datastax/community sources location?"

haruska on "new machine reported as old one with no updated stats"

I have a 21 node cluster on ec2 (cass 1.2.5). One node was experiencing intermittent network issues. I rsync'ed the data to a new node and replaced the old one. The new machine took over fine in the ring for the old node.

In OpsCenter, the old IP shows as back "UP", along with "all 21 agents" connected. I have not installed the OpsCenter agent on the new machine, so the stats for the replaced node are obviously not being updated; it still shows the load and compaction stats for the old machine.

tableau on "Datastax End of Life (support) for specific versions"

Just wondering if anyone knows the end-of-life (support) dates for the following DataStax versions:
DataStax Enterprise Edition 2.2
DataStax Enterprise Edition 3.0
We are just trying to update our EoL wiki.
Thanks ahead of time for any assistance.
Eric

sgansa on "com.datastax.bdp.hadoop.cfs"

In my Java project, I added the following entry to my Maven POM in order to use the CFS classes:
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>1.0.1</version>
</dependency>

Still, it cannot locate the com.datastax.bdp.hadoop.cfs.CassandraFileSystem class, and the build fails with a compilation error.


pgorla on "Zombie Instances Keep Reappearing"

Hi,

I recently tested out the DataStax Enterprise AMI and set up a 2-node cluster with Solr enabled on both nodes.

Unfortunately, the installation failed with this message:


[INFO] Using instance type: m1.large
[ERROR] Clusters within a VPC are not supported. Aborting installation.

Please verify your settings:
--clustername Test --totalnodes 2 --version enterprise --username **** --password **** --searchnodes 2 --opscenter yes
[ERROR] Exception seen in ds1_launcher.py:
Traceback (most recent call last):
  File "/home/ubuntu/datastax_ami/ds1_launcher.py", line 31, in initial_configurations
    ds2_configure.run()
  File "/home/ubuntu/datastax_ami/ds2_configure.py", line 964, in run
  File "/home/ubuntu/datastax_ami/ds2_configure.py", line 55, in exit_path
AttributeError

I set up the nodes in a standard EC2 environment, not a VPC.

Now, I'm trying to shut the nodes down, and I am unable to terminate the nodes completely -- the cluster keeps re-appearing, and adding the same nodes, with the same problem. When I log into the server, I can't shut down DSE because DSE isn't running, but the nodes are still auto-populating.

I've been struggling with this for quite some time; please help me stop these zombie instances.

Thanks in advance.

ken.hancock@schange.com on "Search Request graphs not populated"

None of my search graphs (Search Requests, Search Request Latency) are being populated. Looking through the Data Explorer, I do see data in the rollups60 column family:

192.168.XXX.XXX-group-alerts-getSolrAvgReqPerSec

Our keyspaces use dotted notation ks:group.alerts. Perhaps a parsing error?

Anonymous on "DSE 3.1 number of hadoop tasks"

jmac on "Is Cassandra really only sending one copy of the data between data centers?"

I set up a cluster with two data centers (DC1 and DC2) with 4 nodes in each.

# nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns   Host ID                               Rack
UN  10.53.202.141  88.36 MB   128     13.7%  b157269a-e669-4937-bf52-313d82156c0e  RAC1
UN  10.53.202.146  86.95 MB   128     11.6%  6f2e6b4c-0aa6-47c4-93e7-a7b86b039537  RAC1
UN  10.53.202.148  87.51 MB   128     11.2%  747f96b7-af03-4b07-8fea-a85884bb37b0  RAC1
UN  10.53.202.139  85.9 MB    128     12.8%  e3f411cd-d9a8-49a9-b6d7-b25a067932f8  RAC1
Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns   Host ID                               Rack
UN  10.53.202.161  89.57 MB   128     13.0%  54b04b92-2223-4b2a-956c-fd752ab629da  RAC1
UN  10.53.202.162  86.97 MB   128     11.5%  19bbef90-7ffd-47b2-9d04-c65b15bc7c81  RAC1
UN  10.53.202.164  82.3 MB    128     11.5%  8e1938f5-9470-4b44-be81-97578c2e3ebc  RAC1
UN  10.53.202.163  89.85 MB   128     14.7%  e2e5ff66-b808-4b21-b5d1-cf346720187d  RAC1

My test schema is as follows:

CREATE KEYSPACE test WITH replication = {'class':'NetworkTopologyStrategy', 'DC1':3, 'DC2':3};
USE test;
CREATE TABLE session (session_id text PRIMARY KEY, subscriber_id text, ip_address inet, last_update timestamp);

So I should have 3 copies of the data in each data center, but only one copy should be sent from DC1 to DC2. I tried adding a row on DC1 with tracing on, and I see several messages going to nodes in DC2 where I'd expect to see only one. Am I just misinterpreting the trace output?

cqlsh> use test;
cqlsh:test> insert into session (session_id, ip_address, subscriber_id, last_update) values ('session-100', '10.20.30.40', 'testuser', dateof(now()));

Tracing session: 4d4178f0-ee35-11e2-b099-75526324f72a

 activity                                                                                                                                           | timestamp    | source        | source_elapsed
----------------------------------------------------------------------------------------------------------------------------------------------------+--------------+---------------+----------------
                                                                                                                                 execute_cql3_query | 12:32:27,136 | 10.53.202.139 |              0
                                                                                                               Message received from /10.53.202.139 | 12:32:26,297 | 10.53.202.163 |             91
                                                                                                        Enqueuing forwarded write to /10.53.202.164 | 12:32:26,300 | 10.53.202.163 |           3319
                                                                                                        Enqueuing forwarded write to /10.53.202.161 | 12:32:26,300 | 10.53.202.163 |           3442
                                                                                                                     Acquiring switchLock read lock | 12:32:26,300 | 10.53.202.163 |           3557
                                                                                                                  Sending message to /10.53.202.164 | 12:32:26,301 | 10.53.202.163 |           3649
                                                                                                                             Appending to commitlog | 12:32:26,301 | 10.53.202.163 |           3683
                                                                                                                         Adding to session memtable | 12:32:26,301 | 10.53.202.163 |           3736
                                                                                                                  Sending message to /10.53.202.161 | 12:32:26,301 | 10.53.202.163 |           4148
                                                                                                               Enqueuing response to /10.53.202.139 | 12:32:26,302 | 10.53.202.163 |           5000
                                                                                                                  Sending message to /10.53.202.139 | 12:32:26,302 | 10.53.202.163 |           5168
                                                                                                               Message received from /10.53.202.163 | 12:32:26,307 | 10.53.202.161 |             89
                                                                                                                     Acquiring switchLock read lock | 12:32:26,308 | 10.53.202.161 |           1265
                                                                                                                             Appending to commitlog | 12:32:26,308 | 10.53.202.161 |           1315
                                                                                                                         Adding to session memtable | 12:32:26,308 | 10.53.202.161 |           1408
                                                                                                               Enqueuing response to /10.53.202.139 | 12:32:26,308 | 10.53.202.161 |           1639
                                                                                                                  Sending message to /10.53.202.139 | 12:32:26,308 | 10.53.202.161 |           1826
                                                                                                               Message received from /10.53.202.163 | 12:32:26,318 | 10.53.202.164 |             83
                                                                                                                     Acquiring switchLock read lock | 12:32:26,320 | 10.53.202.164 |           1793
                                                                                                                             Appending to commitlog | 12:32:26,320 | 10.53.202.164 |           1848
                                                                                                                         Adding to session memtable | 12:32:26,323 | 10.53.202.164 |           5386
                                                                                                               Enqueuing response to /10.53.202.139 | 12:32:26,324 | 10.53.202.164 |           5644
                                                                                                                  Sending message to /10.53.202.139 | 12:32:26,328 | 10.53.202.164 |           9634
 Parsing insert into session (session_id, ip_address, subscriber_id, last_update) values ('session-100', '10.20.30.40', 'testuser', dateof(now())); | 12:32:27,136 | 10.53.202.139 |             70
                                                                                                                                 Peparing statement | 12:32:27,137 | 10.53.202.139 |            366
                                                                                                                  Determining replicas for mutation | 12:32:27,137 | 10.53.202.139 |            758
                                                                                                                Enqueuing message to /10.53.202.163 | 12:32:27,140 | 10.53.202.139 |           3869
                                                                                                                     Acquiring switchLock read lock | 12:32:27,140 | 10.53.202.139 |           3937
                                                                                                                             Appending to commitlog | 12:32:27,140 | 10.53.202.139 |           3966
                                                                                                                         Adding to session memtable | 12:32:27,140 | 10.53.202.139 |           4009
                                                                                                                  Sending message to /10.53.202.146 | 12:32:27,140 | 10.53.202.139 |           4053
                                                                                                                  Sending message to /10.53.202.163 | 12:32:27,140 | 10.53.202.139 |           4176
                                                                                                               Message received from /10.53.202.139 | 12:32:27,141 | 10.53.202.141 |             26
                                                                                                                  Sending message to /10.53.202.141 | 12:32:27,141 | 10.53.202.139 |           4376
                                                                                                                     Acquiring switchLock read lock | 12:32:27,143 | 10.53.202.141 |           1533
                                                                                                               Message received from /10.53.202.146 | 12:32:27,143 | 10.53.202.139 |           null
                                                                                                                             Appending to commitlog | 12:32:27,143 | 10.53.202.141 |           1546
                                                                                                            Processing response from /10.53.202.146 | 12:32:27,143 | 10.53.202.139 |           null
                                                                                                                         Adding to session memtable | 12:32:27,143 | 10.53.202.141 |           1564
                                                                                                               Enqueuing response to /10.53.202.139 | 12:32:27,143 | 10.53.202.141 |           2181
                                                                                                         Sending message to mb-cs-l-1/10.53.202.139 | 12:32:27,144 | 10.53.202.141 |           2347
                                                                                                               Message received from /10.53.202.141 | 12:32:27,144 | 10.53.202.139 |           null
                                                                                                            Processing response from /10.53.202.141 | 12:32:27,144 | 10.53.202.139 |           null
                                                                                                               Message received from /10.53.202.163 | 12:32:27,149 | 10.53.202.139 |           null
                                                                                                            Processing response from /10.53.202.163 | 12:32:27,149 | 10.53.202.139 |           null
                                                                                                               Message received from /10.53.202.161 | 12:32:27,153 | 10.53.202.139 |           null
                                                                                                            Processing response from /10.53.202.161 | 12:32:27,153 | 10.53.202.139 |           null
                                                                                                               Message received from /10.53.202.164 | 12:32:27,160 | 10.53.202.139 |           null
                                                                                                            Processing response from /10.53.202.164 | 12:32:27,160 | 10.53.202.139 |           null
                                                                                                               Message received from /10.53.202.139 | 12:32:27,255 | 10.53.202.146 |             26
                                                                                                                     Acquiring switchLock read lock | 12:32:27,256 | 10.53.202.146 |            817
                                                                                                                             Appending to commitlog | 12:32:27,256 | 10.53.202.146 |            833
                                                                                                                         Adding to session memtable | 12:32:27,256 | 10.53.202.146 |           1020
                                                                                                               Enqueuing response to /10.53.202.139 | 12:32:27,257 | 10.53.202.146 |           1434
                                                                                                         Sending message to mb-cs-l-1/10.53.202.139 | 12:32:27,257 | 10.53.202.146 |           1586
                                                                                                                                   Request complete | 12:32:27,140 | 10.53.202.139 |           4664

nyadav.ait on "java datastax driver EXCEPTION No handler set for stream 0"

I started using the latest cassandra-driver-core-1.0.1.jar yesterday against the latest Cassandra, 1.2.6. I have cross-checked that start_native_transport: true is set in cassandra.yaml, and that rpc_address and listen_address are both set to the machine's host name, which is the same name I connect to in the client. The driver prints the message below and then hangs at .build().

I have also cross-checked that I have all the jars listed at http://www.datastax.com/documentation/developer/java-driver/1.0/java-driver/reference/settingUpJavaProgEnv_r.html

I am using JDK 1.6.

Here is the message I got:

Jul 17, 2013 11:20:37 AM com.datastax.driver.core.Connection$Dispatcher messageReceived SEVERE: [mlhwlt08/192.168.2.111-1] No handler set for stream 0 (this is a bug, either of this driver or of Cassandra, you should report it). Received message is ROWS [peer(system, peers), org.apache.cassandra.db.marshal.InetAddressType][data_center(system, peers), org.apache.cassandra.db.marshal.UTF8Type][rack(system, peers), org.apache.cassandra.db.marshal.UTF8Type][tokens(system, peers), org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type)][rpc_address(system, peers), org.apache.cassandra.db.marshal.InetAddressType]

| 192.168.2.109 | datacenter1 | rack1 | 000100142d37353634343931333331313737343033343435 | 192.168.2.109
| 192.168.2.108 | datacenter1 | rack1 | 0001000130 | 192.168.2.108

Please help me resolve this problem.
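
For reference, a minimal connection sketch with the 1.0 driver showing the pattern described above; the contact point is a placeholder for the host name in question, and the hang reported in the post happens at build():

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectSketch {
    public static void main(String[] args) {
        // Placeholder for the machine's host name configured as
        // rpc_address/listen_address in cassandra.yaml.
        Cluster cluster = Cluster.builder()
                .addContactPoint("cassandra-host")
                .build();                 // the post reports a hang here
        Session session = cluster.connect();
        System.out.println("Connected to cluster: "
                + cluster.getMetadata().getClusterName());
        session.shutdown();
        cluster.shutdown();
    }
}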
