Commit cb120b3

Add docs

1 parent 5b29972 commit cb120b3
File tree: 2 files changed, +41 -9 lines changed
docs/guides/code_examples/storages/opening.py

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+import asyncio
+
+from crawlee.storages import Dataset
+
+
+async def main() -> None:
+    # Named storage (persists across runs)
+    dataset_named = await Dataset.open(name='my-persistent-dataset')
+
+    # Unnamed storage with alias (purged on start)
+    dataset_unnamed = await Dataset.open(alias='temporary-results')
+
+    # Default unnamed storage (both calls are equivalent and purged on start)
+    dataset_default = await Dataset.open()
+    dataset_default = await Dataset.open(alias='default')
+
+
+if __name__ == '__main__':
+    asyncio.run(main())
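
For orientation, here is a minimal sketch of how a dataset opened this way is typically used. It relies on the standard `Dataset.push_data` and `Dataset.get_data` methods, which are not part of this commit and are assumed from the existing crawlee-python API:

import asyncio

from crawlee.storages import Dataset


async def main() -> None:
    # Open (or create) the named dataset from the example above.
    dataset = await Dataset.open(name='my-persistent-dataset')

    # Append one item; datasets are append-only collections of dict records.
    await dataset.push_data({'url': 'https://example.com', 'title': 'Example'})

    # Read everything back; get_data returns a paginated list page.
    page = await dataset.get_data()
    for item in page.items:
        print(item)


asyncio.run(main())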

docs/guides/storages.mdx

Lines changed: 22 additions & 9 deletions
@@ -9,6 +9,8 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import RunnableCodeBlock from '@site/src/components/RunnableCodeBlock';

+import OpeningExample from '!!raw-loader!roa-loader!./code_examples/storages/opening.py';
+
 import RqBasicExample from '!!raw-loader!roa-loader!./code_examples/storages/rq_basic_example.py';
 import RqWithCrawlerExample from '!!raw-loader!roa-loader!./code_examples/storages/rq_with_crawler_example.py';
 import RqWithCrawlerExplicitExample from '!!raw-loader!roa-loader!./code_examples/storages/rq_with_crawler_explicit_example.py';
@@ -26,7 +28,9 @@ import KvsWithCrawlerExplicitExample from '!!raw-loader!roa-loader!./code_exampl
 import CleaningDoNotPurgeExample from '!!raw-loader!roa-loader!./code_examples/storages/cleaning_do_not_purge_example.py';
 import CleaningPurgeExplicitlyExample from '!!raw-loader!roa-loader!./code_examples/storages/cleaning_purge_explicitly_example.py';

-Crawlee offers several storage types for managing and persisting your crawling data. Request-oriented storages, such as the <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink>, help you store and deduplicate URLs, while result-oriented storages, like <ApiLink to="class/Dataset">`Dataset`</ApiLink> and <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>, focus on storing and retrieving scraping results. This guide helps you choose the storage type that suits your needs.
+Crawlee offers several storage types for managing and persisting your crawling data. Request-oriented storages, such as the <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink>, help you store and deduplicate URLs, while result-oriented storages, like <ApiLink to="class/Dataset">`Dataset`</ApiLink> and <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>, focus on storing and retrieving scraping results. This guide explains when to use each type, how to interact with them, and how to control their lifecycle.
+
+## Overview

 Crawlee's storage system consists of two main layers:
 - **Storages** (<ApiLink to="class/Dataset">`Dataset`</ApiLink>, <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>, <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink>): High-level interfaces for interacting with different storage types.
@@ -70,6 +74,21 @@ Storage --|> KeyValueStore
 Storage --|> RequestQueue
 ```

+### Named and unnamed storages
+
+Crawlee supports two types of storages:
+
+- **Named storages**: Storages with a specific name that persist across runs. They are useful when you want to share data between crawler runs or access the same storage from multiple places.
+- **Unnamed storages**: Temporary storages identified by an alias and scoped to a single run. They are automatically purged at the start of each run (when `purge_on_start` is enabled, which is the default).
+
+### Default storage
+
+Each storage type (<ApiLink to="class/Dataset">`Dataset`</ApiLink>, <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>, <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink>) has a default instance that can be accessed without specifying an `id`, `name`, or `alias`. The default unnamed storage is accessed by calling the storage's `open` method without parameters, which is the most common way to use storages in simple crawlers. The special alias `"default"` is equivalent to calling `open` without parameters.
+
+<RunnableCodeBlock className="language-python" language="python">
+{OpeningExample}
+</RunnableCodeBlock>
+
 ## Request queue

 The <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink> is the primary storage for URLs in Crawlee, especially useful for deep crawling. It supports dynamic addition of URLs, making it ideal for recursive tasks where URLs are discovered and added during the crawling process (e.g., following links across multiple pages). Each Crawlee project has a **default request queue**, which can be used to store URLs during a specific run.
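
The new section above describes the `name`/`alias` distinction in terms of all three storage types. As a hedged illustration of that pattern, the sketch below assumes `KeyValueStore.open` and `RequestQueue.open` accept the same `name` and `alias` parameters as `Dataset.open` (the guide implies this, but only the `Dataset` signature appears in this diff):

import asyncio

from crawlee.storages import KeyValueStore, RequestQueue


async def main() -> None:
    # Named key-value store: persists across runs, e.g. for crawl state.
    kvs = await KeyValueStore.open(name='crawl-state')
    await kvs.set_value('last_run', '2024-01-01')

    # Alias-scoped request queue: purged on start like other unnamed storages.
    rq = await RequestQueue.open(alias='seed-urls')
    await rq.add_request('https://crawlee.dev')


asyncio.run(main())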
@@ -186,13 +205,7 @@ Crawlee provides the following helper function to simplify interactions with the

 ## Cleaning up the storages

-By default, Crawlee automatically cleans up **default storages** before each crawler run to ensure a clean state. This behavior is controlled by the <ApiLink to="class/Configuration#purge_on_start">`Configuration.purge_on_start`</ApiLink> setting (default: `True`).
-
-### What gets purged
-
-- **Default storages** are completely removed and recreated at the start of each run, ensuring that you start with a clean slate.
-- **Named storages** are never automatically purged and persist across runs.
-- The behavior depends on the storage client implementation.
+By default, Crawlee cleans up all unnamed storages (including the default one) at the start of each run, so every crawl begins with a clean state. This behavior is controlled by <ApiLink to="class/Configuration#purge_on_start">`Configuration.purge_on_start`</ApiLink> (default: `True`). In contrast, named storages are never purged automatically and persist across runs. The exact behavior may vary depending on the storage client implementation.

 ### When purging happens

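To make the `purge_on_start` toggle from the paragraph above concrete, here is a minimal sketch of disabling it. It assumes the `Configuration` model in `crawlee.configuration` accepts `purge_on_start` as a field (the doc links to it) and that the global configuration can be registered via `crawlee.service_locator`; the registration mechanism is an assumption about the current crawlee-python API, not part of this diff:

# Sketch only: disable purge-on-start so unnamed storages survive restarts.
# Assumes crawlee.service_locator exposes set_configuration; verify against
# the installed crawlee-python version before relying on this.
from crawlee import service_locator
from crawlee.configuration import Configuration

config = Configuration(purge_on_start=False)
service_locator.set_configuration(config)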
@@ -221,6 +234,6 @@ Note that purging behavior may vary between storage client implementations. For

 ## Conclusion

-This guide introduced you to the different storage types available in Crawlee and how to interact with them. You learned how to manage requests using the <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink> and store and retrieve scraping results using the <ApiLink to="class/Dataset">`Dataset`</ApiLink> and <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>. You also discovered how to use helper functions to simplify interactions with these storages. Finally, you learned how to clean up storages before starting a crawler run.
+This guide introduced you to the different storage types available in Crawlee and how to interact with them. You learned about the distinction between named storages (persistent across runs) and unnamed storages with aliases (temporary and purged on start). You discovered how to manage requests using the <ApiLink to="class/RequestQueue">`RequestQueue`</ApiLink> and store and retrieve scraping results using the <ApiLink to="class/Dataset">`Dataset`</ApiLink> and <ApiLink to="class/KeyValueStore">`KeyValueStore`</ApiLink>. You also learned how to use helper functions to simplify interactions with these storages and how to control storage cleanup behavior.

 If you have questions or need assistance, feel free to reach out on our [GitHub](https://github.com/apify/crawlee-python) or join our [Discord community](https://discord.com/invite/jyEM2PRvMU). Happy scraping!
