ERROR [SharedPool-Worker-2] 2015-08-25 12:46:27,839 Message.java:611 - Unexpected exception during request; channel = [id: 0xf0ea910e, /127.0.0.1:57948 => /127.0.0.1:9042]
java.lang.AssertionError: null
at org.apache.cassandra.db.partitions.PartitionUpdate.add(PartitionUpdate.java:541) ~[main/:na]
at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:85) ~[main/:na]
at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:824) ~[main/:na]
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:611) ~[main/:na]
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:599) ~[main/:na]
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204) ~[main/:na]
at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:235) ~[main/:na]
at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:220) ~[main/:na]
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123) ~[main/:na]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [main/:na]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [main/:na]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
# This reproduces an error I've been seeing while running disk space tests.
# This harness uses ccm to create a single-node cluster running trunk, then
# calls a Python script that demonstrates the error. That Python script
# creates a table with 100 UUID columns, then runs two insert statements: the
# first inserts 8 values and succeeds; the second inserts 9 and fails.
# The failed insert produces an error in the C* log, so this script prints the
# tail of the node's log starting at the ERROR line. If that doesn't show the
# error, you can page through the log yourself with `ccm node1 showlog`.
# Be aware that this script leaves the cluster running; a teardown note at the
# end of the script shows how to clean up.
cluster_name=repro-too-many-values
set -x
ccm stop
ccm switch $cluster_name && ccm remove
ccm create $cluster_name -n 1 -v git:trunk &&
ccm start --wait-for-binary-proto &&
python ./too_many_values_with_many_columns.py
ccm node1 showlog | grep '^ERROR' -A1000 | cat
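
# Teardown: once you're done inspecting the node's log, the cluster left
# running above can be removed with the same ccm commands used earlier.
# Intentionally commented out here, since the whole point is to leave the
# cluster and its log in place:
#
#   ccm switch $cluster_name && ccm stop && ccm remove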
from __future__ import print_function
from cassandra.cluster import Cluster
if __name__ == '__main__':
    session = Cluster().connect()
    session.execute("CREATE KEYSPACE ks WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };")
    # create table with 100 uuid columns
    session.execute("""
CREATE TABLE ks.tab (key uuid, c00 uuid, c01 uuid, c02 uuid, c03 uuid,
c04 uuid, c05 uuid, c06 uuid, c07 uuid, c08 uuid,
c09 uuid, c10 uuid, c11 uuid, c12 uuid, c13 uuid,
c14 uuid, c15 uuid, c16 uuid, c17 uuid, c18 uuid,
c19 uuid, c20 uuid, c21 uuid, c22 uuid, c23 uuid,
c24 uuid, c25 uuid, c26 uuid, c27 uuid, c28 uuid,
c29 uuid, c30 uuid, c31 uuid, c32 uuid, c33 uuid,
c34 uuid, c35 uuid, c36 uuid, c37 uuid, c38 uuid,
c39 uuid, c40 uuid, c41 uuid, c42 uuid, c43 uuid,
c44 uuid, c45 uuid, c46 uuid, c47 uuid, c48 uuid,
c49 uuid, c50 uuid, c51 uuid, c52 uuid, c53 uuid,
c54 uuid, c55 uuid, c56 uuid, c57 uuid, c58 uuid,
c59 uuid, c60 uuid, c61 uuid, c62 uuid, c63 uuid,
c64 uuid, c65 uuid, c66 uuid, c67 uuid, c68 uuid,
c69 uuid, c70 uuid, c71 uuid, c72 uuid, c73 uuid,
c74 uuid, c75 uuid, c76 uuid, c77 uuid, c78 uuid,
c79 uuid, c80 uuid, c81 uuid, c82 uuid, c83 uuid,
c84 uuid, c85 uuid, c86 uuid, c87 uuid, c88 uuid,
c89 uuid, c90 uuid, c91 uuid, c92 uuid, c93 uuid,
c94 uuid, c95 uuid, c96 uuid, c97 uuid, c98 uuid,
PRIMARY KEY (key)
) WITH bloom_filter_fp_chance=0.010000 AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
read_repair_chance=0.100000 AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'}
AND compression={'sstable_compression': ''};
""")
print('inserting 8 values (should succeed)')
session.execute("""INSERT INTO ks.tab (key, c70, c60, c58, c18, c98, c90, c32) VALUES (
7c1309b7-06c4-423b-966c-d56695cad550,
c1529ac8-dc7a-4359-9a0d-29137f713298,
3aefda72-ed13-420c-9e5f-c6d11dcb03ba,
a05ad853-ca34-4c73-aaa5-7641437e0691,
eab0d1f6-2310-4de5-91e8-0ddaeebb8c3d,
49f7bfd9-cfc5-4e6a-b823-2b6139a48e6d,
6f5e0867-e3b8-4ff1-ad09-e9710114276b,
f9edaa45-d40c-4c9a-bf09-9771cbd10228)""")
    print('inserting 9 values (fails)')
    session.execute("""INSERT INTO ks.tab (key, c70, c60, c58, c18, c98, c90, c32, c12) VALUES (
a05ad853-ca34-4c73-aaa5-7641437e0691,
7c1309b7-06c4-423b-966c-d56695cad550,
c1529ac8-dc7a-4359-9a0d-29137f713298,
3aefda72-ed13-420c-9e5f-c6d11dcb03ba,
eab0d1f6-2310-4de5-91e8-0ddaeebb8c3d,
49f7bfd9-cfc5-4e6a-b823-2b6139a48e6d,
6f5e0867-e3b8-4ff1-ad09-e9710114276b,
f9edaa45-d40c-4c9a-bf09-9771cbd10228,
89cde784-a20a-4bc5-8954-f7f8a90c515b)""")
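
# Aside (not executed above): the c00..c98 column list in the CREATE TABLE
# statement could just as well be generated instead of written out by hand,
# e.g. something like the following sketch, which ignores the WITH options:
#
#   cols = ', '.join('c{:02d} uuid'.format(i) for i in range(99))
#   ddl = 'CREATE TABLE ks.tab (key uuid, {}, PRIMARY KEY (key))'.format(cols)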