Description
Sorry if this issue is due to mis-usage of the API.
I am working with a table that has a MAP(STRING, DOUBLE) column. I end up with a BufferOverflowException after appending enough data, even though the data volume involved is small.
Specifically, I can reproduce the issue with a table with a single column, and Maps with 2 entries, with small keys:
```java
@Slf4j
public class TestDuckDbMapOverflow {
    @Test
    public void testOverflowMap() throws SQLException {
        DuckDBConnection ddbC = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:");
        ddbC.createStatement().execute("CREATE TABLE someT (\"col\" MAP(STRING, DOUBLE));");
        DuckDBAppender appender = ddbC.createAppender("someT");

        int i = 0;
        while (true) {
            try {
                appender.beginRow();
                appender.append(Map.of("key_1_" + i, 1.0D * i, "key_2_" + i, -1.0D * i));
                appender.endRow();
            } catch (Throwable t) {
                log.error("Issue for i=" + i, t);
                break;
            }
            i++;
        }
    }
}
```
It fails at row #1024 with:
```
2025-10-30 14:17:19.978 ERROR main TestDuckDbMapOverflow: 29 - Issue for i=1024
java.nio.BufferOverflowException: null
	at java.base/java.nio.Buffer.nextPutIndex(Buffer.java:744) ~[?:?]
	at java.base/java.nio.DirectByteBuffer.putInt(DirectByteBuffer.java:785) ~[?:?]
	at org.duckdb.DuckDBAppender.putStringOrBlob(DuckDBAppender.java:1300) ~[duckdb_jdbc-1.4.1.0.jar:?]
	at org.duckdb.DuckDBAppender.putCompositeElement(DuckDBAppender.java:1844) ~[duckdb_jdbc-1.4.1.0.jar:?]
	at org.duckdb.DuckDBAppender.putCompositeElementStruct(DuckDBAppender.java:2020) ~[duckdb_jdbc-1.4.1.0.jar:?]
	at org.duckdb.DuckDBAppender.putMap(DuckDBAppender.java:1782) ~[duckdb_jdbc-1.4.1.0.jar:?]
	at org.duckdb.DuckDBAppender.append(DuckDBAppender.java:920) ~[duckdb_jdbc-1.4.1.0.jar:?]
	at some_project.TestDuckDbMapOverflow.testOverflowMap(TestDuckDbMapOverflow.java:26) ~[test-classes/:?]
```
The ByteBuffer overflows while writing the String key. In my real-life case, the failure occurred in .putDouble(), with larger maps and larger String keys; I suspect both failures have the same root cause.
(Of course, I am not expecting unbounded insertion to work, but 1024 rows of such small maps should not be a problem. I was not able to pinpoint what is wrong in the ByteBuffer allocation or in the flush decision.)
Workaround: calling DuckDBAppender.flush() periodically. Though it is unclear to me when such a flush should be called, if not on every row.
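For reference, here is a minimal sketch of the workaround I am using: flushing the appender every fixed number of rows. The batch size of 512 is arbitrary (anything below the row count where the overflow appears, 1024 in the repro above, seems to avoid it); it is a guess, not a documented limit.

```java
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Map;

import org.duckdb.DuckDBAppender;
import org.duckdb.DuckDBConnection;

public class MapAppendWithPeriodicFlush {
    // Hypothetical batch size: chosen below the observed failure point (row 1024),
    // not derived from any documented buffer capacity.
    private static final int FLUSH_EVERY = 512;

    public static void main(String[] args) throws SQLException {
        try (DuckDBConnection conn =
                 (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:")) {
            conn.createStatement().execute(
                "CREATE TABLE someT (\"col\" MAP(STRING, DOUBLE));");

            DuckDBAppender appender = conn.createAppender("someT");
            for (int i = 0; i < 10_000; i++) {
                appender.beginRow();
                appender.append(Map.of(
                    "key_1_" + i, 1.0D * i,
                    "key_2_" + i, -1.0D * i));
                appender.endRow();

                // Flush buffered rows before the internal ByteBuffer overflows.
                if ((i + 1) % FLUSH_EVERY == 0) {
                    appender.flush();
                }
            }
            // Flush the final partial batch and release the appender.
            appender.flush();
            appender.close();
        }
    }
}
```

This avoids the exception in my testing, but it only masks the problem: the appender should presumably size or flush its internal buffer itself.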