---
layout: post
comments: true
title: Pinterest's Wide Column Database in Python with RocksDB
excerpt: Building a simplified version of Pinterest's Wide Column Database in Python with RocksDB
categories: database
tags: [python,rocksdb]
toc: true
img_excerpt:
---

In a recent article on the [Pinterest Engineering Blog](https://medium.com/pinterest-engineering/building-pinterests-new-wide-column-database-using-rocksdb-f5277ee4e3d2), they described in detail how they implemented **Rockstorewidecolumn**, a RocksDB-based distributed wide column database, in C++. While their system tackles petabytes and millions of requests per second with a distributed architecture, the core concepts of mapping a wide column data model onto a key-value store like RocksDB are fascinating.

This article explores how to implement a simpler, single-instance version of Pinterest's **Rockstorewidecolumn** in Python using the power and efficiency of RocksDB.

### What's a Wide Column Database, Anyway?

Think beyond traditional relational tables with fixed schemas. A wide column database offers:

* **Rows:** Each identified by a unique `row_key`.
* **Flexible Columns:** Each row can have a different set and number of columns. No predefined schema for all rows!
* **Columnar Data:** Data is organized by columns within a row.
* **Versioned Cells:** Often, values within a column can have multiple versions, typically timestamped.

This model is great for use cases like user profiles where attributes vary from user to user, time-series data, or, as Pinterest showed, storing user event sequences.
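
To make this concrete, here is a minimal sketch of how a single row in this logical model might look, using plain Python dictionaries. The names `user123`, `email`, and the timestamp values are made-up illustrations, not data from Pinterest's system:

```python
# Hypothetical logical view of one row in a "user_profile" dataset:
# row_key -> column_name -> list of (timestamp, value) versions,
# newest version first.
row = {
    "user123": {
        "email": [
            (1678886400000, "alice@example.com"),  # newest version
            (1640995200000, "alice@oldmail.com"),  # older version
        ],
        "last_login_event": [
            (1678886400000, "login_web"),
        ],
        # Another row could have a completely different set of columns.
    }
}
```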

### From Wide Columns to Simple Keys & Values

RocksDB is an incredibly fast embedded key-value store. However, it doesn't inherently understand "rows," "columns," or "versions." It just knows keys and values, both of which are byte strings. Our main task is to cleverly design a **key structure** that lets us represent our wide column model.

From Pinterest's article, the data model mapping (or **Logical View**) from wide columns to key-value looks like this:

* **Dataset:** A collection of data for a use case (like a table).
* **Row:** Identified by a `row_key` (e.g., `user123`), contains items.
* **Item:** A `column_name` identifying a specific attribute within a row (e.g., `email`, `last_login_event`) with a list of versioned cells.
* **Cell:** A `timestamp` when this specific piece of data was recorded (e.g., milliseconds since epoch) and a `column_value` (the actual data).

To store a specific cell (a value for a given dataset, row, column, and time), we can simply concatenate these elements into a single RocksDB key, using a separator like the null byte `\x00`. Choosing a good separator is crucial so that it cannot be confused with bytes appearing in the other key components. This **Storage View** is visually explained in the following diagram.

```
+--------------------+------------------+-------------------+------------------+-------------------+------------------+---------------------------------------+
| dataset_name_bytes | KEY_SEPARATOR    | row_key_bytes     | KEY_SEPARATOR    | column_name_bytes | KEY_SEPARATOR    | timestamp_bytes                       |
+--------------------+------------------+-------------------+------------------+-------------------+------------------+---------------------------------------+
| (String as UTF-8)  | (Null Byte `\0`) | (String as UTF-8) | (Null Byte `\0`) | (String as UTF-8) | (Null Byte `\0`) | (8-byte uint64, Big-Endian, Inverted) |
+--------------------+------------------+-------------------+------------------+-------------------+------------------+---------------------------------------+
```

Another thing to consider in the implementation is versioning: the ability to retrieve the latest versions of a column first.

For this, we can use a timestamp trick that leverages the fact that RocksDB sorts keys lexicographically in ascending order. We can get a descending order for timestamps as follows:

* Use integer timestamps (e.g., milliseconds since epoch).
* Store `MAX_POSSIBLE_TIMESTAMP - actual_timestamp`.
* Pack this inverted timestamp as a fixed-length, big-endian byte string (e.g., using Python's `struct.pack('>Q', inverted_timestamp)` for an 8-byte unsigned integer).

This way, newer (smaller inverted) timestamps will sort before older ones.

Here is a complete Python snippet that demonstrates how a key is constructed:

```python
import struct

SEPARATOR = b"\x00"
MAX_UINT64 = 2**64 - 1

dataset = b"user_profile"
row_key = b"user123"
column_name = b"email"
timestamp_ms = 1678886400000

# Invert the timestamp so newer cells sort first (lexicographically smaller)
inverted_ts_bytes = struct.pack('>Q', MAX_UINT64 - timestamp_ms)

# The RocksDB key might look like
key = dataset + SEPARATOR + row_key + SEPARATOR + column_name + SEPARATOR + inverted_ts_bytes
print(key)
```

This results in a key that looks like this:

```python
b'user_profile\x00user123\x00email\x00\xff\xff\xfey\x1a\x92\x8f\xff'
```
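
To sanity-check the inversion trick, here is a small sketch that splits the key above back into its parts and recovers the original timestamp. This mirrors what a `_decode_key` helper might do, but is a simplified illustration rather than the project's exact code:

```python
import struct

SEPARATOR = b"\x00"
MAX_UINT64 = 2**64 - 1

key = b'user_profile\x00user123\x00email\x00\xff\xff\xfey\x1a\x92\x8f\xff'

# The timestamp is fixed-length (8 bytes), so take it from the end;
# the remaining prefix splits cleanly on the null-byte separator.
prefix, inverted_ts_bytes = key[:-8], key[-8:]
dataset, row_key, column_name = prefix.rstrip(SEPARATOR).split(SEPARATOR)

timestamp_ms = MAX_UINT64 - struct.unpack('>Q', inverted_ts_bytes)[0]
print(dataset, row_key, column_name, timestamp_ms)
# b'user_profile' b'user123' b'email' 1678886400000
```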

### Python Implementation

The full implementation of this datastore can be found in the [GitHub KVWC project](https://github.com/dzlab/vibecoding/tree/main/kvwc), specifically in the [WideColumnDB](https://github.com/dzlab/vibecoding/blob/main/kvwc/wide_column_db.py) class.

Here are some key points from this implementation:

* **`_encode_key` / `_decode_key`:** Facilitate translating our logical model to and from RocksDB's byte strings.
* **`put_row`:** Takes a list of items for a row. Each item can optionally specify a timestamp; if not, the current server time is used. All writes for a single `put_row` call are wrapped in a `rocksdb.WriteBatch` for atomicity at the row-key level for that call.
* **`get_row`:** The most complex method (see the sketch after this list):
  * It uses RocksDB's iterators and `seek()` operations.
  * To get all columns for a row, it seeks to `row_key_bytes + SEPARATOR`.
  * To get specific columns, it can either iterate and filter or seek to `row_key_bytes + SEPARATOR + column_name_bytes + SEPARATOR`.
  * It collects up to `num_versions` for each requested column, respecting the (optional) time range.
* **`delete_row`:** Also uses iterators to find all keys matching the criteria (entire row, specific columns, or even specific versions) and deletes them using a `WriteBatch`.
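
The core of `get_row` is a prefix scan. Here is a minimal sketch of the idea using `python-rocksdb`-style bindings (the same library family as the `rocksdb.WriteBatch` mentioned above); the helper name, signature, and `num_versions` handling are simplified illustrations, not the exact code from `WideColumnDB`:

```python
import rocksdb

SEPARATOR = b"\x00"

def scan_column_versions(db, dataset, row_key, column_name, num_versions=1):
    """Yield up to num_versions (key, value) pairs for one column,
    newest first thanks to the inverted-timestamp key layout."""
    prefix = dataset + SEPARATOR + row_key + SEPARATOR + column_name + SEPARATOR
    it = db.iteritems()
    it.seek(prefix)  # jump to the first key with this prefix
    count = 0
    for key, value in it:
        if not key.startswith(prefix) or count >= num_versions:
            break  # left this column's key range, or collected enough versions
        yield key, value
        count += 1

# Usage sketch (hypothetical database path):
# db = rocksdb.DB("kvwc.db", rocksdb.Options(create_if_missing=True))
# for key, value in scan_column_versions(db, b"user_profile", b"user123", b"email", 2):
#     print(key, value)
```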

### Unlocking Wide Column Features

With the chosen key structure in our implementation, several wide column features become quite natural:

* **Versioned Values:** Automatically handled by including the timestamp in the key. Each update (even an "overwrite" of a conceptual column) with a new timestamp creates a new, distinct entry in RocksDB.
* **Time Range Queries:** The `get_row` method can filter versions based on `start_timestamp` and `end_timestamp` by examining the decoded timestamp from the key.
* **Out-of-Order Updates:** Clients can provide their own timestamps for data, allowing for backfills or event-time recording.
* **TTL (Time-to-Live):**
  * **Read-time enforcement:** When reading, check `key_timestamp + configured_ttl < current_timestamp`. If expired, don't return it (see the sketch after this list).
  * **Physical deletion:** This is trickier for a simple implementation. RocksDB's compactions will eventually remove deleted data. A more advanced system might use RocksDB's compaction filters or a background process to scan and delete expired keys.
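
As an illustration of read-time TTL enforcement, here is a minimal sketch; the `ttl_ms` parameter and helper name are hypothetical, not part of `WideColumnDB`:

```python
import time

def is_expired(key_timestamp_ms, ttl_ms):
    """Read-time TTL check: a cell is expired once its write time
    plus the configured TTL falls before the current time."""
    now_ms = int(time.time() * 1000)
    return key_timestamp_ms + ttl_ms < now_ms

# A reader would simply skip expired cells:
# versions = [v for v in versions if not is_expired(v.timestamp_ms, ttl_ms)]
```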

### What's Next?

Our Python implementation makes a number of trade-offs that keep it simple, yet still surprisingly useful for smaller-scale applications:

* **Single Instance:** It's not distributed, so no built-in replication, sharding, or high availability like Pinterest's Rockstorewidecolumn.
* **Basic Compaction:** Relies on RocksDB's default compaction unless you delve into advanced configurations or custom filters for TTL.
* **Pagination:** The `get_row` example above doesn't include pagination for very wide rows (many columns). This would require returning a "continuation token" (e.g., the last key part processed) for the client to pass in the next request, as sketched after this list.
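
One way to add such marker-based pagination, building on the prefix scan sketched earlier; the `page_size` and `start_after` parameters are illustrative assumptions, not part of the current `get_row`:

```python
def scan_page(db, prefix, page_size=100, start_after=None):
    """Return one page of (key, value) pairs under a prefix, plus a
    continuation token (the last key seen) for the next request."""
    it = db.iteritems()
    # Appending a null byte seeks to the smallest key strictly after the token.
    it.seek(start_after + b"\x00" if start_after else prefix)
    page = []
    for key, value in it:
        if not key.startswith(prefix) or len(page) >= page_size:
            break
        page.append((key, value))
    token = page[-1][0] if len(page) == page_size else None
    return page, token
```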

Here are a few things to consider if we were to expand this implementation:

* **Dataset/Table Management:** Consider using RocksDB's "Column Families" for better logical separation of different datasets (tables) within a single DB instance.
* **Advanced TTL:** Implement custom compaction filters or background jobs for efficient TTL enforcement.
* **Robust Pagination:** Add proper marker-based pagination to `get_row`.
* **Serialization:** Use a more robust serialization format than plain strings for values (e.g., JSON, MessagePack, Protobuf).

## That's all folks

This article walked through the implementation of a simplified version of Pinterest's Rockstorewidecolumn. We demonstrated that by carefully designing a key structure, we can map complex data models onto a high-performance key-value store like RocksDB.

I hope you enjoyed this article, feel free to leave a comment or reach out on Twitter [@bachiirc](https://twitter.com/bachiirc).
