Channel: DataStax Support Forums » Recent Topics

brwnba2010 on "Datastax ODBC driver error '...ThriftHiveClient: Unknown: errno = 10053'"


I installed DataStax CE and then installed the DataStax ODBC driver on a client machine, and turned off the Windows Firewall. Here are my settings and the error message when attempting to test the ODBC driver. Note that I am using Skytap VM images in this environment:

Host is set to the IP of the host: 10.20.25.3 in my case.
Port: 8888
Database: default
Advanced settings are left at their defaults:

Rows fetched per block: 10000
Default string column length: 255
Everything else unchecked.

The ODBC driver hangs, and after killing it with Task Manager I get this ODBC error.

Connector Version: V1.0.0.1007

Running connectivity tests...

Attempting connection
Failed to establish connection
SQLSTATE: HY000[DataStax][Hardy] (22) Error from ThriftHiveClient: Unknown: errno = 10053

TESTS COMPLETED WITH ERROR.


troper on "OpsCenter 3.1 Upgrade Failure"


I am running a 3-node development cluster on AWS EC2 using the DataStax AMI: DSE 3.0.1, Cassandra 1.1.9.3.

I upgraded to OpsCenter 3.1. I restarted OpsCenter, clicked "fix" for the agents, entered credentials and refreshed. All agents connected. OpsCenter loads, but no keyspaces are visible and the error "Error loading alerts: Alerts have not been loaded yet. There may be a connectivity problem with Cassandra" is displayed.

My column families were created with CQL3 and use compound primary keys. I cannot see any schema info after the upgrade.

I would appreciate any ideas on how to resolve this issue!

Log entries:

2013-04-29 18:57:06+0000 [] INFO: Testing SSH connectivity to 10.115.11.199, 10.218.41.204, 10.78.222.11
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 2048 09:47:46:51:d5:f9:30:f2:56:7f:dc:56:d6:18:de:a0 _ (RSA) for 10.218.41.204
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 256 49:ad:ca:6b:1b:17:8f:f6:8e:3b:8a:c6:7e:3c:aa:e5 _ (ECDSA) for 10.218.41.204
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 1024 c8:97:cf:53:3c:aa:e3:c4:74:92:2e:92:7d:0b:31:1f _ (DSA) for 10.218.41.204
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 2048 d7:fa:27:c5:8a:da:3b:69:00:8b:31:87:93:1a:a7:3b _ (RSA) for 10.78.222.11
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 2048 ae:6f:39:a2:37:9d:ea:75:94:f7:09:9e:b5:d7:05:1a _ (RSA) for 10.115.11.199
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 256 e2:9d:2d:f1:fe:2e:86:7b:b3:f5:4a:81:bf:59:fe:d1 _ (ECDSA) for 10.78.222.11
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 1024 aa:d8:90:28:2b:9f:8d:df:84:86:9d:c7:32:4c:a7:fb _ (DSA) for 10.78.222.11
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 256 6c:b4:e1:2b:36:d5:ac:6c:a7:25:b4:77:61:fc:d1:ae _ (ECDSA) for 10.115.11.199
2013-04-29 18:57:07+0000 [] INFO: Obtained fingerprint 1024 c8:93:7b:67:d0:04:14:ea:cd:26:56:6f:ac:27:6e:58 _ (DSA) for 10.115.11.199
2013-04-29 18:57:08+0000 [] INFO: Testing SSH connectivity to 10.115.11.199, 10.218.41.204, 10.78.222.11
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 2048 09:47:46:51:d5:f9:30:f2:56:7f:dc:56:d6:18:de:a0 _ (RSA) for 10.218.41.204
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 2048 d7:fa:27:c5:8a:da:3b:69:00:8b:31:87:93:1a:a7:3b _ (RSA) for 10.78.222.11
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 256 49:ad:ca:6b:1b:17:8f:f6:8e:3b:8a:c6:7e:3c:aa:e5 _ (ECDSA) for 10.218.41.204
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 2048 ae:6f:39:a2:37:9d:ea:75:94:f7:09:9e:b5:d7:05:1a _ (RSA) for 10.115.11.199
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 256 e2:9d:2d:f1:fe:2e:86:7b:b3:f5:4a:81:bf:59:fe:d1 _ (ECDSA) for 10.78.222.11
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 1024 c8:97:cf:53:3c:aa:e3:c4:74:92:2e:92:7d:0b:31:1f _ (DSA) for 10.218.41.204
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 256 6c:b4:e1:2b:36:d5:ac:6c:a7:25:b4:77:61:fc:d1:ae _ (ECDSA) for 10.115.11.199
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 1024 aa:d8:90:28:2b:9f:8d:df:84:86:9d:c7:32:4c:a7:fb _ (DSA) for 10.78.222.11
2013-04-29 18:57:09+0000 [] INFO: Obtained fingerprint 1024 c8:93:7b:67:d0:04:14:ea:cd:26:56:6f:ac:27:6e:58 _ (DSA) for 10.115.11.199
2013-04-29 18:57:09+0000 [DEV_CLUSTER] INFO: Beginning install of OpsCenter agent to 10.115.11.199
2013-04-29 18:57:09+0000 [DEV_CLUSTER] INFO: Installing debian package on 10.115.11.199
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Beginning install of OpsCenter agent to 10.218.41.204
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Beginning install of OpsCenter agent to 10.78.222.11
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Version: {'search': None, 'dse': '3.0.1', 'tasktracker': None, 'jobtracker': None, 'cassandra': '1.1.9.3'}
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Node 10.115.11.199 changed its mode to normal
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Using 10.115.11.199 as the RPC address for node 10.115.11.199
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Using 10.115.11.199 as the RPC address for node 10.115.11.199
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Installing debian package on 10.218.41.204
2013-04-29 18:57:24+0000 [DEV_CLUSTER] INFO: Installing debian package on 10.78.222.11
2013-04-29 18:57:38+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-04-29 18:57:38+0000 [DEV_CLUSTER] INFO: Node 10.78.222.11 changed its mode to normal
2013-04-29 18:57:38+0000 [DEV_CLUSTER] INFO: Using 10.78.222.11 as the RPC address for node 10.78.222.11
2013-04-29 18:57:38+0000 [DEV_CLUSTER] INFO: Using 10.78.222.11 as the RPC address for node 10.78.222.11
2013-04-29 18:57:40+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-04-29 18:57:40+0000 [DEV_CLUSTER] INFO: Using 10.218.41.204 as the RPC address for node 10.218.41.204
2013-04-29 18:57:40+0000 [DEV_CLUSTER] INFO: Node 10.218.41.204 changed its mode to normal
2013-04-29 18:57:40+0000 [DEV_CLUSTER] INFO: Using 10.218.41.204 as the RPC address for node 10.218.41.204
2013-04-29 20:52:01+0000 [] INFO: No handlers could be found for logger "xhtml2pdf"
2013-04-30 13:06:15+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-04-30 13:06:15+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-04-30 13:06:15+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-01 14:59:03+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-01 14:59:03+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-01 14:59:03+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-01 21:42:33+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 21:42:33+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 21:44:59+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 21:44:59+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 21:54:52+0000 [] ERROR: Problem while calling CFDataController: unpack requires a string argument of length 8
2013-05-01 21:54:52+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 21:54:52+0000 [] ERROR: Problem while calling CFDataController: unpack requires a string argument of length 8
2013-05-01 21:54:52+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 21:56:42+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 21:56:42+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 22:03:42+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 22:03:42+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 22:18:07+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 22:18:07+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-01 22:18:07+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-01 22:18:07+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 14:07:12+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-02 14:07:12+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-02 14:07:12+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-02 14:09:30+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 14:09:30+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 15:34:25+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 15:34:25+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 15:36:44+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 15:36:44+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 16:08:10+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 16:08:10+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 16:08:10+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 16:08:10+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-02 17:50:35+0000 [] ERROR: Problem while calling CFDataController: year is out of range
2013-05-02 17:50:35+0000 [] ERROR:
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 845, in getCFPage

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 753, in _unpack_rows

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 771, in _unpack_columns

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 785, in _unpack_col

File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftVals.py", line 334, in unpack_date

2013-05-03 02:07:44+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-03 02:07:44+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-03 02:07:44+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-03 14:07:45+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-03 14:07:45+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-03 14:07:45+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-04 02:08:20+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-04 02:08:20+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-04 02:08:20+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-04 14:08:27+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-04 14:08:27+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-04 14:08:27+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-05 02:08:37+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-05 02:08:37+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-05 02:08:37+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-05 14:08:48+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-05 14:08:48+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-05 14:08:48+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-06 02:08:53+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-06 02:08:53+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-06 02:08:53+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-06 14:08:54+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-06 14:08:54+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-06 14:08:54+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-07 02:09:47+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-07 02:09:47+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-07 02:09:47+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-07 15:42:19+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-07 15:42:19+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-07 15:42:19+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-08 15:06:24+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-08 15:06:24+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-08 15:06:24+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-10 18:12:27+0000 [DEV_CLUSTER] INFO: Agent for ip 10.78.222.11 is version u'3.0.2'
2013-05-10 18:12:27+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-10 18:12:28+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-10 19:32:19+0000 [] INFO: Received SIGTERM, shutting down.
2013-05-10 19:32:19+0000 [DEV_CLUSTER] INFO: OpsCenter shutting down.
2013-05-10 19:32:19+0000 [] INFO: (TCP Port 61620 Closed)
2013-05-10 19:32:19+0000 [] INFO: (TCP Port 61619 Closed)
2013-05-10 19:32:19+0000 [] INFO: Stopping factory <morbid.morbid.StompFactory instance at 0x2e2e128>
2013-05-10 19:32:19+0000 [] INFO: (TCP Port 8888 Closed)
2013-05-10 19:32:19+0000 [] INFO: Stopping factory <opscenterd.WebServer.OpsCenterdWebServer instance at 0x2e28440>
2013-05-10 19:32:19+0000 [DEV_CLUSTER] INFO: Stopping CassandraCluster service
2013-05-10 19:32:19+0000 [DEV_CLUSTER] INFO: OpsCenter shutting down.
2013-05-10 19:32:19+0000 [] INFO: Main loop terminated.
2013-05-10 19:32:19+0000 [] INFO: Server Shut Down.
2013-05-10 19:32:27+0000 [] INFO: Log opened.
2013-05-10 19:32:27+0000 [] INFO: twistd 10.2.0 (/usr/bin/python2.7 2.7.3) starting up.
2013-05-10 19:32:27+0000 [] INFO: reactor class: twisted.internet.selectreactor.SelectReactor.
2013-05-10 19:32:27+0000 [] INFO: set uid/gid 0/0
2013-05-10 19:32:27+0000 [] INFO: Logging level set to 'info'
2013-05-10 19:32:27+0000 [] INFO: OpsCenter version: 3.1.0
2013-05-10 19:32:27+0000 [] INFO: Compatible agent version: 3.1.0
2013-05-10 19:32:27+0000 [] INFO: Loading per-cluster config file /etc/opscenter/clusters/DEV_CLUSTER.conf
2013-05-10 19:32:27+0000 [] INFO: HTTP BASIC authentication disabled
2013-05-10 19:32:27+0000 [] INFO: Starting webserver with ssl disabled.
2013-05-10 19:32:27+0000 [] INFO: SSL agent communication enabled
2013-05-10 19:32:27+0000 [] INFO: opscenterd.WebServer.OpsCenterdWebServer starting on 8888
2013-05-10 19:32:27+0000 [] INFO: Starting factory <opscenterd.WebServer.OpsCenterdWebServer instance at 0x3fd33f8>
2013-05-10 19:32:27+0000 [] INFO: morbid.morbid.StompFactory starting on 61619
2013-05-10 19:32:27+0000 [] INFO: Starting factory <morbid.morbid.StompFactory instance at 0x3ff2fc8>
2013-05-10 19:32:27+0000 [] INFO: Configuring agent communication with ssl support enabled.
2013-05-10 19:32:27+0000 [] INFO: morbid.morbid.StompFactory starting on 61620
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Starting services for cluster DEV_CLUSTER
2013-05-10 19:32:27+0000 [] INFO: Starting PushService
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Starting CassandraCluster service
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: agent_config items: {'cassandra_log_location': '/var/log/cassandra/system.log', 'thrift_port': 9160, 'thrift_ssl_truststore': None, 'rollups300_ttl': 2419200, 'rollups86400_ttl': -1, 'jmx_port': 7199, 'metrics_ignored_solr_cores': '', 'api_port': '61621', 'metrics_enabled': 1, 'thrift_ssl_truststore_type': 'JKS', 'kerberos_use_ticket_cache': True, 'kerberos_renew_tgt': True, 'rollups60_ttl': 604800, 'cassandra_install_location': '', 'rollups7200_ttl': 31536000, 'kerberos_debug': False, 'storage_keyspace': 'OpsCenter', 'ec2_metadata_api_host': '169.254.169.254', 'provisioning': 0, 'kerberos_use_keytab': True, 'metrics_ignored_column_families': '', 'thrift_ssl_truststore_password': None, 'metrics_ignored_keyspaces': 'system, system_traces, system_auth, dse_auth, OpsCenter'}
2013-05-10 19:32:27+0000 [] INFO: OS Version: Linux version 3.2.0-35-virtual (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #55-Ubuntu SMP Wed Dec 5 18:02:05 UTC 2012
2013-05-10 19:32:27+0000 [] INFO: CPU Info: ['2666.760', '2666.760']
2013-05-10 19:32:27+0000 [] INFO: Mem Info: 17079MB
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Cluster Name: DEV_CLUSTER
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Snitch: com.datastax.bdp.snitch.DseDelegateSnitch
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Partitioner: org.apache.cassandra.dht.RandomPartitioner
2013-05-10 19:32:27+0000 [] INFO: Package Manager: aptitude
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Recognizing new node 10.115.11.199 ('113427455640312821154458202477256070485')
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Recognizing new node 10.218.41.204 ('0')
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Recognizing new node 10.78.222.11 ('56713727820156410577229101238628035242')
2013-05-10 19:32:27+0000 [DEV_CLUSTER] ERROR: Error getting keyspaces: [Failure instance: Traceback: <type 'exceptions.KeyError'>: 'cf_test'
/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py:361:callback
/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py:455:_startRunCallbacks
/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py:542:_runCallbacks
/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py:1076:gotResult
--- <exception caught here> ---
/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py:1020:_inlineCallbacks
/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py:408:getNonCompactCQL3ColumnFamilies
]
2013-05-10 19:32:27+0000 [DEV_CLUSTER] ERROR: Error when attempting to create OpsCenter schema: (KeyError) 'cf_test'
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Sleeping for 10s before retrying schema creation
2013-05-10 19:32:27+0000 [] INFO: Unhandled error in Deferred:
2013-05-10 19:32:27+0000 [] Unhandled Error
Traceback (most recent call last):
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 361, in callback
self._startRunCallbacks(result)
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 455, in _startRunCallbacks
self._runCallbacks()
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 542, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1076, in gotResult
_inlineCallbacks(r, g, deferred)
--- <exception caught here> ---
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python2.7/dist-packages/opscenterd/ThriftService.py", line 408, in getNonCompactCQL3ColumnFamilies

exceptions.KeyError: 'cf_test'

2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Version: {'search': None, 'dse': '3.0.1', 'tasktracker': None, 'jobtracker': None, 'cassandra': '1.1.9.3'}
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Node 10.218.41.204 changed its mode to normal
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Node 10.115.11.199 changed its mode to normal
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Node 10.78.222.11 changed its mode to normal
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Using 10.78.222.11 as the RPC address for node 10.78.222.11
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Using 10.218.41.204 as the RPC address for node 10.218.41.204
2013-05-10 19:32:27+0000 [DEV_CLUSTER] INFO: Using 10.115.11.199 as the RPC address for node 10.115.11.199
2013-05-10 19:32:29+0000 [DEV_CLUSTER] INFO: Agent for ip 10.218.41.204 is version u'3.0.2'
2013-05-10 19:32:29+0000 [DEV_CLUSTER] INFO: Agent for ip 10.115.11.199 is version u'3.0.2'
2013-05-10 19:32:29+0000 [DEV_CLUSTER] IN

blair on "Java Driver: way to get schema versions"


When making many schema changes in a row I've gotten errors when the new column families haven't fully "settled." I work around this with the Astyanax client by using the following Scala code:

def waitForSchemaToSettle[R](keyspace: Keyspace)(body: => R): R = {
  // Wait until all nodes report a single schema version before running the body.
  var preSchemaVersions = keyspace.describeSchemaVersions
  while (preSchemaVersions.size != 1) {
    Thread.sleep(1000)
    preSchemaVersions = keyspace.describeSchemaVersions
  }

  val result = body

  // Wait until the schema change made by the body has propagated and converged.
  var postSchemaVersions = keyspace.describeSchemaVersions
  while (postSchemaVersions == preSchemaVersions ||
         postSchemaVersions.size != 1) {
    Thread.sleep(1000)
    postSchemaVersions = keyspace.describeSchemaVersions
  }

  result
}

Looking through the Java Driver API I don't see a way to get the schema versions. Is there something similar I can do with the Java Driver?

Thanks,
Blair

PS Having a dedicated Java Driver forum/mailing list would be useful.
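For comparison, here is a rough equivalent of the same check using only the Java Driver and plain CQL queries. This is a sketch rather than an official driver API: it assumes Cassandra 1.2+, where each node exposes schema_version in the system.local and system.peers tables, and the class name SchemaAgreementCheck is made up for illustration.

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SchemaAgreementCheck {
    // Collect the distinct schema versions reported by the coordinator and its peers.
    static Set<UUID> schemaVersions(Session session) {
        Set<UUID> versions = new HashSet<UUID>();
        for (Row row : session.execute("SELECT schema_version FROM system.local")) {
            versions.add(row.getUUID("schema_version"));
        }
        for (Row row : session.execute("SELECT schema_version FROM system.peers")) {
            UUID version = row.getUUID("schema_version");
            if (version != null) {   // peers that are down may report no version
                versions.add(version);
            }
        }
        return versions;
    }

    public static void main(String[] args) throws InterruptedException {
        Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
        Session session = cluster.connect();
        // Poll until every reachable node reports the same schema version.
        while (schemaVersions(session).size() != 1) {
            Thread.sleep(1000);
        }
        cluster.shutdown();
    }
}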

Gibu on "java driver InvalidQueryException: no keyspace has been specified"


I was running the SimpleClient example provided with the Java Driver 1.0 for Apache Cassandra, using CQL 3.

I spent quite some time figuring out why I was getting the error "no keyspace has been specified" even though I explicitly mention the keyspace in the query. I later figured out that I need to explicitly set it on the "session" when using a PreparedStatement, as seen in the code attached.
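For illustration, here is a minimal sketch (not part of the original post) of the two ways to give the session a default keyspace before preparing statements, assuming Java Driver 1.0 behavior; the class name KeyspaceWorkaround is made up.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class KeyspaceWorkaround {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("localhost").build();

        // Option 1: bind the session to a keyspace at connect time.
        Session session = cluster.connect("simplex");

        // Option 2: keep a keyspace-less session and issue USE explicitly,
        // which is the workaround used in the attached code:
        // Session session = cluster.connect();
        // session.execute("USE simplex");

        cluster.shutdown();
    }
}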

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to cluster: MyCluster
Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: no keyspace has been specified
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:32)
at com.datastax.driver.core.ResultSetFuture.extractCause(ResultSetFuture.java:242)
at com.datastax.driver.core.Session.toPreparedStatement(Session.java:243)
at com.datastax.driver.core.Session.prepare(Session.java:167)
at com.example.cassandra.SimpleClient.loadDataUsingBoundStatements(SimpleClient.java:30)
at com.example.cassandra.SimpleClient.main(SimpleClient.java:152)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: no keyspace has been specified
at com.datastax.driver.core.ResultSetFuture.convertException(ResultSetFuture.java:272)

Can someone confirm whether this is the expected behavior, since I am specifying it in the query as keyspace.table?

Code attached.

Thanks
Gibu
P.S. If it's the expected behaviour, the tutorial here could be changed to mention that:
http://www.datastax.com/doc-source/developer/java-driver/index.html#quick_start/qsSimpleClientBoundStatements_t.html

//$Id$
package com.example.cassandra;

import com.datastax.driver.core.*;
import java.util.*;

public class SimpleClient {
    private Cluster cluster;
    private Session session;

    public Session getSession() {
        return session;
    }

    public void loadDataUsingBoundStatements() {
        // Possible bug? With the line below commented out, prepare() throws an exception
        // saying no keyspace has been set, even though the keyspace is set in the query.
        //getSession().execute("use simplex");

        PreparedStatement statement = getSession().prepare(
                "INSERT INTO simplex.songs " +
                "(id, title, album, artist, tags) " +
                "VALUES (?, ?, ?, ?, ?);");
        BoundStatement boundStatement = new BoundStatement(statement);
        Set<String> tags = new HashSet<String>();
        tags.add("jazz");
        tags.add("2013");
        getSession().execute(boundStatement.bind(
                UUID.fromString("756716f7-2e54-4715-9f00-91dcbea6cf50"),
                "La Petite Tonkinoise'",
                "Bye Bye Blackbird'",
                "Joséphine Baker",
                tags));
        statement = getSession().prepare(
                "INSERT INTO simplex.playlists " +
                "(id, song_id, title, album, artist) " +
                "VALUES (?, ?, ?, ?, ?);");
        boundStatement = new BoundStatement(statement);
        getSession().execute(boundStatement.bind(
                UUID.fromString("2cc9ccb7-6221-4ccb-8387-f22b6a1b354d"),
                UUID.fromString("756716f7-2e54-4715-9f00-91dcbea6cf50"),
                "La Petite Tonkinoise",
                "Bye Bye Blackbird",
                "Joséphine Baker"));
    }

    public void querySchema() {
        ResultSet results = session.execute("SELECT * FROM simplex.playlists " +
                "WHERE id = 2cc9ccb7-6221-4ccb-8387-f22b6a1b354d;");
        System.out.println("\n" + String.format("%-30s\t%-20s\t%-20s\n%s", "title", "album", "artist",
                "-------------------------------+-----------------------+--------------------"));
        for (Row row : results) {
            System.out.println(String.format("%-30s\t%-20s\t%-20s", row.getString("title"),
                    row.getString("album"), row.getString("artist")));
        }
        System.out.println();
    }

    public void loadData() {
        session.execute(
                "INSERT INTO simplex.songs (id, title, album, artist, tags) " +
                "VALUES (" +
                "756716f7-2e54-4715-9f00-91dcbea6cf50," +
                "'La Petite Tonkinoise'," +
                "'Bye Bye Blackbird'," +
                "'Joséphine Baker'," +
                "{'jazz', '2013'})" +
                ";");
        session.execute(
                "INSERT INTO simplex.playlists (id, song_id, title, album, artist) " +
                "VALUES (" +
                "2cc9ccb7-6221-4ccb-8387-f22b6a1b354d," +
                "756716f7-2e54-4715-9f00-91dcbea6cf50," +
                "'La Petite Tonkinoise'," +
                "'Bye Bye Blackbird'," +
                "'Joséphine Baker'" +
                ");");
    }

    public void createSchema() {
        session.execute("CREATE KEYSPACE simplex WITH replication " +
                "= {'class':'SimpleStrategy', 'replication_factor':1};");

        session.execute(
                "CREATE TABLE simplex.songs (" +
                "id uuid PRIMARY KEY," +
                "title text," +
                "album text," +
                "artist text," +
                "tags set<text>," +
                "data blob" +
                ");");
        session.execute(
                "CREATE TABLE simplex.playlists (" +
                "id uuid," +
                "title text," +
                "album text, " +
                "artist text," +
                "song_id uuid," +
                "PRIMARY KEY (id, title, album, artist)" +
                ");");
    }

    public void connect(String node) {
        cluster = Cluster.builder()
                .addContactPoint(node).build();
        session = cluster.connect();
        Metadata metadata = cluster.getMetadata();
        System.out.printf("Connected to cluster: %s\n",
                metadata.getClusterName());
        for (Host host : metadata.getAllHosts()) {
            System.out.printf("Datatacenter: %s; Host: %s; Rack: %s\n",
                    host.getDatacenter(), host.getAddress(), host.getRack());
        }
    }

    public void close() {
        cluster.shutdown();
    }

    public static void main(String[] args) {
        SimpleClient client = new SimpleClient();
        client.connect("localhost");
        try {
            client.createSchema();
        } catch (Exception alreadyExists) {
            System.out.print("Exception: Schema already exists.");
        }
        //client.loadData();
        client.loadDataUsingBoundStatements();
        client.querySchema();

        client.close();
    }
}

upant on "Bulk-loading error in DataStax AMI"


Hi,

I am having a problem running bulk loading with the SSTableSimpleUnsortedWriter class on the DataStax AMI (with default configurations). I tried the bulk-loading example at http://www.datastax.com/dev/blog/bulk-loading.

Here is what I did:
- Created a 3-node cluster by following the instructions at http://www.datastax.com/docs/datastax_enterprise3.0/install/install_dse_ami (DataStax AMI).
- Created the keyspace and column families as described in the example and was able to write/read data using the CLI.
- Created a Java class using the example code at http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java.
- While compiling the class, I got an error: the code listed in the example doesn't pass a partitioner argument to the SSTableSimpleUnsortedWriter constructor. Passing RandomPartitioner to the constructor fixed the error (see the sketch after this list).
- While running the code, I got the following error:
Error instantiating snitch class 'com.datastax.bdp.snitch.DseDelegateSnitch'.
Fatal configuration error; unable to start server.
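For reference, a minimal sketch of the constructor call with the partitioner argument. This is only an illustration under the assumption that the Cassandra 1.1/1.2 constructor takes (directory, partitioner, keyspace, column family, comparator, sub-comparator, buffer size); the class name WriterSetup and the Demo/Users names are placeholders, not from the original post.

import java.io.File;

import org.apache.cassandra.db.marshal.AsciiType;
import org.apache.cassandra.dht.RandomPartitioner;
import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
import org.apache.cassandra.utils.ByteBufferUtil;

public class WriterSetup {
    public static void main(String[] args) throws Exception {
        // Output directory must already exist; keyspace/column family names are placeholders.
        File directory = new File("/tmp/Demo/Users");

        SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                directory,
                new RandomPartitioner(),   // the argument missing from the older example code
                "Demo",                    // keyspace
                "Users",                   // column family
                AsciiType.instance,        // column name comparator
                null,                      // no sub-comparator (not a super column family)
                64);                       // buffer size in MB

        // Write one sample row so the flush on close produces an SSTable.
        writer.newRow(ByteBufferUtil.bytes("key1"));
        writer.addColumn(ByteBufferUtil.bytes("name"), ByteBufferUtil.bytes("value"),
                System.currentTimeMillis() * 1000);
        writer.close();
    }
}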

However, the same code on Apache Cassandra (the non-DataStax distribution) runs perfectly without any issue or additional configuration.

Is there any additional configuration needed to make it work in DataStax Cassandra, or is this a bug? Has anybody tried sstableloader in DSE?

Thanks in advance,
Uddhab

jwlee on "dse-3.0 - Reindexing is not working for Solr copyfield"


Hi,

I have an existing single instance of DSE Search/Solr.
After adding some copyFields to the schema (and posting it), I did a re-index/full-index via the DSE Search/Solr Admin UI.
The new copyFields are still empty after the re-indexing.

In addition, I have also tried the following steps, as mentioned in the docs, to rebuild the Solr indexes from the command line:
1) Stop the DSE node.
2) Delete all sub-directories under the /var/lib/cassandra/data/solr.data/ directory.
3) Start DSE in search mode.
4) Run nodetool rebuild_index for each column family.

Similarly, the copyFields are still empty.

Any help is much appreciated!

Thanks!
Regards,
JW

blueash on "Amazon AMI Setup Video"


Hi

Is there a video of Amazon AMI setup?

thanks,
-ash

royw on "gradual deterioration of Hive performance"


Hi,

We are developing a daily batch process with Hive (DSE 3.0). As more tables are loaded and processed, we are observing a distinct slowdown of Hive for an identical HQL batch process (operating on batch data of approximately the same size).

One distinct factor that appears to be related to the slowdown is the increasing delay between the MapReduce job completing and the result table being generated. Following is an example of the symptom we are observing: for the simple HQL execution below, the last entry in the execution log has a timestamp of 17:53:53, while the target file shows a timestamp of 17:54:18 (converted from UTC), so there is a 25-second gap between the processing finishing and the output table being created. When we started this daily processing, the gap was about 1 or 2 seconds; as more data is processed, the delay has gradually increased to the current 25 seconds. Our current DSE node size is about 80 GB.

Based on the CFS design, we couldn't think of any reason why the amount of data already stored would negatively affect new data insertion. Has anyone experienced a similar problem? Any idea what configuration options could contribute to this?

thanks,
Roy

##HQL output:
> insert overwrite table tmp_bind_00_20130116A
> select distinct
> concat(cast(site_id as string),'-',bind_id) as bind_guid
> , bind_date
> , day_of_week
> , bind_id as ori_bind_id
> , site_id
> , device_id
> , substring(bind_date,1,10) as bind_day
> from rawdata_00_20130116
> ;
Automatically selecting local only mode for query
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Execution log at: /tmp/root/root_20130513175353_58d446d0-b8c5-4206-9d22-bdb692c98d14.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-05-13 17:53:15,452 null map = 0%, reduce = 0%
2013-05-13 17:53:21,456 null map = 100%, reduce = 0%
2013-05-13 17:53:27,459 null map = 100%, reduce = 100%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Loading data to table staging.tmp_bind_00_20130116A
Table staging.tmp_bind_00_20130116A stats: [num_partitions: 0, num_files: 0, num_rows: 21685, total_size: 0, raw_data_size: 2233555]
OK
bind_guid bind_date day_of_week ori_bind_id site_id device_id bind_day
Time taken: 73.574 seconds

##excerpt of execution log (/tmp/root/root_20130513175353_58d446d0-b8c5-4206-9d22-bdb692c98d14.log):

2013-05-13 17:53:39,467 INFO exec.ExecDriver (SessionState.java:printInfo(391)) - Ended Job = job_local_0001
2013-05-13 17:53:39,473 INFO exec.FileSinkOperator (Utilities.java:mvFileToFinalPath(1267)) - Moving tmp dir: cfs://IVM-CRS-VM41/tmp/hive-root/hive_2013-05-13_17-53-05_049_2835076602177774524/_tmp.-ext-10000 to: cfs://IVM-CRS-VM41/tmp/hive-root/hive_2013-05-13_17-53-05_049_2835076602177774524/_tmp.-ext-10000.intermediate
2013-05-13 17:53:53,348 INFO exec.FileSinkOperator (Utilities.java:mvFileToFinalPath(1278)) - Moving tmp dir: cfs://IVM-CRS-VM41/tmp/hive-root/hive_2013-05-13_17-53-05_049_2835076602177774524/_tmp.-ext-10000.intermediate to: cfs://IVM-CRS-VM41/tmp/hive-root/hive_2013-05-13_17-53-05_049_2835076602177774524/-ext-10000

## time stamp of target table:
> dfs -stat /user/hive/warehouse/staging.db/tmp_bind_00_20130116A/000000_0;
2013-05-13 21:54:18


matt.lieber on "DSE 3.0 with Solr - support for dynamic column family"


hi,

I want to use a Cassandra dynamic CF in DSE 3.0 that would use around 20,000 dynamic (i.e. added on the fly) columns (or composite columns in CQL 3 parlance).
I also want to use Solr in DSE in order to search on these column names in that CF. I obviously cannot create a Solr schema.xml for these 20,000 column names, since they are not static, and that sounds like a far too big schema.xml file anyway.

A solution to this may be dynamic fields in Solr: I see that there is a concept of dynamic fields in Solr in the doc, but the doc (http://www.datastax.com/docs/datastax_enterprise2.0/search/dse_search_schema) says we are limited to only 1024 dynamic fields in Solr.
Is there any solution to this? Has this limit been, or is it going to be, raised in DSE 3.0.1? Is there any other way to be able to use Solr to search on 20k columns?

thanks

gdelgado on "package opscenter is not ready for configuration cannot configure (current status `half-installed&#"


I am installing DataStax Enterprise for evaluation; the install is running on an Ubuntu 13.04 server. When running "sudo apt-get install dse-full opscenter" I get the error below. Any help will be appreciated.

Reading package lists... Done
Building dependency tree
Reading state information... Done
opscenter is already the newest version.
The following NEW packages will be installed:
dse-full
0 upgraded, 1 newly installed, 0 to remove and 11 not upgraded.
1 not fully installed or removed.
Need to get 0 B/60.6 MB of archives.
After this operation, 49.2 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Selecting previously unselected package dse-full.
(Reading database ... 54702 files and directories currently installed.)
Unpacking dse-full (from .../dse-full_3.0.1-1_all.deb) ...
dpkg: error processing opscenter (--configure):
package opscenter is not ready for configuration
cannot configure (current status `half-installed')
No apport report written because MaxReports is reached already
Setting up dse-full (3.0.1-1) ...
Errors were encountered while processing:
opscenter
E: Sub-process /usr/bin/dpkg returned an error code (1)

datastaxnewbie on "Cassandra timestamp to Solr TrieDateField"


Hi,

I have a StartTime column in Cassandra that is typed as timestamp, and I created a corresponding field in the Solr schema that is of type TrieDateField. When I store the timestamp 4/15/2012 12:26:22 PM into Cassandra, I get back the same value. When I retrieve the document through Solr using the same row key (id), the value that I get back for that same field is "20114840-09-19T02:26:40Z", which doesn't appear to be in ISO 8601 format either. Do I need to set precisionStep in the Solr schema for that field? What am I doing wrong?

Thanks,

Paul

gdelgado on "Start Errored: Timed out waiting for Cassandra to start."


I am evaluating DataStax Enterprise 3.1.0. I was able to install the OpsCenter server, but when trying to add nodes to the cluster I get the following error: "Start Errored: Timed out waiting for Cassandra to start." I have both servers in the same security group in AWS. Any help would be appreciated.

Thanks..

gdelgado on "DataStax AMI not working"


I am using the following article to launch the pre-configured DataStax AMI: http://www.datastax.com/docs/datastax_enterprise3.0/install/install_dse_ami. I have tried to launch the AMI twice and have gotten the same result: an error telling me to check the ami.log file for errors. When I check the log, I get the message below.

[ERROR] EC2 is experiencing some issues and has not allocated all of the resources in u$ in under 10 minutes

Aborting the clustering of this reservation. Please try again.
[ERROR] Exception seen in ds1_launcher.py:
Traceback (most recent call last):
File "/home/ubuntu/datastax_ami/ds1_launcher.py", line 31, in initial_configurations
ds2_configure.run()
File "/home/ubuntu/datastax_ami/ds2_configure.py", line 945, in run
File "/home/ubuntu/datastax_ami/ds2_configure.py", line 385, in get_seed_list
File "/home/ubuntu/datastax_ami/ds2_configure.py", line 55, in exit_path
AttributeError

stuartbeattie on "Cassandra startup error."


DataStax Community Edition 1.2.3, Windows 7 64-bit

I've been evaluating the latest Community Edition of Cassandra in a single-machine test environment. As such, I'm making constant changes to the schema and data. The following error keeps cropping up, and I am unable to restart the service without removing the offending commit log file. I haven't done much research with regard to setup and configuration. Am I missing something simple here?

Thanks,
Stuart

ERROR [main] 2013-04-28 13:41:59,713 CassandraDaemon.java (line 415) Exception encountered during startup
java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:305)
at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:284)
at com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:96)
at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:76)
at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:31)
at java.util.TreeMap.put(Unknown Source)
at org.apache.cassandra.db.TreeMapBackedSortedColumns.addColumn(TreeMapBackedSortedColumns.java:102)
at org.apache.cassandra.db.TreeMapBackedSortedColumns.addColumn(TreeMapBackedSortedColumns.java:88)
at org.apache.cassandra.db.AbstractColumnContainer.addColumn(AbstractColumnContainer.java:109)
at org.apache.cassandra.db.AbstractColumnContainer.addColumn(AbstractColumnContainer.java:104)
at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:101)
at org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:376)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:203)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:98)
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:146)
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:126)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:269)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:398)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:441)

datastaxnewbie on "Byte array from Solr to Cassandra CF (DSE 3.0.1)"


Hi,

What is the proper way to create a byte array field in Solr and have it reflected in the C* CF as a blob or something similar? It looks like the corresponding column is always created in C* as int. The schema that is loaded to create the CF is below (plus the CF description in CQL). Thanks.

<?xml version="1.0" encoding="UTF-8" ?>
<schema name="DateTest" version="1.1">
<types>
<fieldType name="uuid" class="solr.UUIDField"/>
<fieldType name="bytes" class="solr.ByteField"/>
</types>
<fields>
<field name="id" type="uuid" indexed="true" stored="true"/>
<field name="MyData" type="bytes" indexed="false" stored="true"/>

</fields>

<uniqueKey>id</uniqueKey>

</schema>

-------------C* Table after loading schema----------

cqlsh:test> describe table "DateTest";

CREATE TABLE "DateTest" (
"KEY" uuid PRIMARY KEY,
"MyData" int,
"_docBoost" text,
"_solr_schema.xml" text,
"_solr_schema.xml.bak" text,
"_solr_solrconfig.xml" text,
"_solr_solrconfig.xml.bak" text,
"_ttl_expire" bigint,
solr_query text
) WITH
comment='' AND
caching='KEYS_ONLY' AND
read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
replicate_on_write='true' AND
compaction_strategy_class='SizeTieredCompactionStrategy' AND
compression_parameters:sstable_compression='SnappyCompressor';

CREATE INDEX test_DateTest_MyData_index ON "DateTest" ("MyData");

CREATE INDEX test_DateTest__docBoost_index ON "DateTest" ("_docBoost");

CREATE INDEX DateTest__solr_schemaxml_idx ON "DateTest" ("_solr_schema.xml");

CREATE INDEX DateTest__solr_schemaxmlbak_idx ON "DateTest" ("_solr_schema.xml.bak");

CREATE INDEX DateTest__solr_solrconfigxml_idx ON "DateTest" ("_solr_solrconfig.xml");

CREATE INDEX DateTest__solr_solrconfigxmlbak_idx ON "DateTest" ("_solr_solrconfig.xml.bak");

CREATE INDEX test_DateTest__ttl_expire_index ON "DateTest" ("_ttl_expire");

CREATE INDEX test_DateTest_solr_query_index ON "DateTest" (solr_query);


felipe.abezerra on "DSE 3.0 with Internal Authentication and Hadoop?"


Hi,

Can somebody tell me if it's possible to have, in the same cluster, nodes with Solr and Hadoop using Internal Authentication?

Regards,

matt.lieber on "DSE 3.0 with Solr - support for integer type?"

yienyien on "File offset bug ?"


Hi,
I deployed Cassandra 1.2.3 on an EC2 instance and have not found any answer to this.

1) I have a very big keyspace (around 10 column families, lots of writes), and I want to manage a second keyspace (ClientWorld) that is small, with few writes.
2)
> cassandra-cli

CREATE KEYSPACE ClientWorld;

USE ClientWorld;

CREATE COLUMN FAMILY ClientMeta with comparator=UTF8Type and key_validation_class=UTF8Type;
CREATE COLUMN FAMILY ClientMeta2 with comparator=UTF8Type and key_validation_class=UTF8Type;

SET ClientMeta[utf8('0')][utf8('username')]=utf8('toto');
SET ClientMeta2[utf8('0')][utf8('username')]=utf8('toto');

3)
> cassandra-cli
USE ClientWorld;
[default@ClientWorld] GET ClientMeta[utf8('0')];
=> (column=username, value=746f746f, timestamp=1366303494180000)
Returned 1 results.
Elapsed time: 1.32 msec(s).
[default@ClientWorld] GET ClientMeta2[utf8('0')];
=> (column=username, value=746f746f, timestamp=1366303494206000)
Returned 1 results.
Elapsed time: 1.85 msec(s).

4)
> nodetool snapshot ClientWorld

5)
> cassandra-cli
USE ClientWorld;
[default@ClientWorld] GET ClientMeta[utf8('0')];
null
TimedOutException()
at org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:7874)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:594)
at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:578)
at org.apache.cassandra.cli.CliClient.doSlice(CliClient.java:548)
at org.apache.cassandra.cli.CliClient.executeGet(CliClient.java:684)
at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:216)
at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
[default@ClientWorld] GET ClientMeta2[utf8('0')];
=> (column=username, value=746f746f, timestamp=1366303494206000)
Returned 1 results.
Elapsed time: 1.97 msec(s).

6) Cassandra logs:
java.lang.RuntimeException: java.lang.IllegalArgumentException: unable to seek to position 3188 in /mnt/md0/cassandra/data/ClientWorld/ClientMeta/ClientWorld-ClientMeta-ib-1-Data.db (54 bytes) in read-only mode
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: unable to seek to position 3188 in /mnt/md0/cassandra/data/ClientWorld/ClientMeta/ClientWorld-ClientMeta-ib-1-Data.db (54 bytes) in read-only mode
at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:306)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:42)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:978)
at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:60)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:68)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:44)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:275)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1363)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1220)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1132)
at org.apache.cassandra.db.Table.getRow(Table.java:348)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
... 3 more

7) If I run
> nodetool compact ClientWorld ClientMeta
everything becomes all right again.
8)
> cassandra-cli
USE ClientWorld;
[default@ClientWorld] GET ClientMeta[utf8('0')];
=> (column=username, value=746f746f, timestamp=1366303494180000)
Returned 1 results.
Elapsed time: 2.03 msec(s).
[default@ClientWorld] GET ClientMeta2[utf8('0')];
=> (column=username, value=746f746f, timestamp=1366303494206000)
Returned 1 results.
Elapsed time: 1.45 msec(s).

9) I can reproduce this on my production database but not on my local cassandra.

Thank you for any help.

whoisvolos on "Writing to cfs with Flume"


Hello, Everyone!

I'd like to write unstructured logs to DSE's underlying CFS, the way I did with the HDFS sink from Flume on Hadoop/HDFS. Sinks like Logsandra require creating keyspaces and column families and then write data into those column families, not into the cfs column family; I'd like to write only to the file system (the cfs column family), for further work with Hive without external tables.

Is this possible? I haven't found a sink for it. Maybe I can use the standard HDFS sink? If so, which ports should I set up in the sink?

cb55555 on "How do you change the data location?"


I have a packaged deployment of DataStax Enterprise on Ubuntu. I got the DSE service up to make sure everything works. Then I shut the service down because I wanted the data to go to a different directory: /mnt/datadrive/lib/cassandra

I updated the /etc/dse/cassandra/cassandra.yaml file in the following locations:

data_file_directories:
- /mnt/datadrive/lib/cassandra/data

commitlog_directory: /mnt/datadrive/lib/cassandra/commitlog

saved_caches_directory: /mnt/datadrive/lib/cassandra/saved_caches

Basically, wherever there was a reference to the /var/lib/cassandra folder, I changed to /mnt/datadrive/lib/cassandra.

After starting up the service, I checked the status with sudo service dse status, and I found that the dse daemon wasn't running.

I'm assuming that all of the data files are created in the appropriate directories when the service starts up, since I've blown out the /var/lib/cassandra folder and restarted the service just to make sure. However, when I changed the data directory, the service won't start up. There must be more places than cassandra.yaml that reference the data directory.
