diff --git a/platform-cloud/cloud-sidebar.json b/platform-cloud/cloud-sidebar.json
index d894b23eb..904d71535 100644
--- a/platform-cloud/cloud-sidebar.json
+++ b/platform-cloud/cloud-sidebar.json
@@ -6,7 +6,7 @@
"label": "Tutorials",
"collapsed": true,
"items": [
- "getting-started/quickstart-demo/comm-showcase",
+ "quickstart",
"getting-started/rnaseq",
"getting-started/proteinfold",
"getting-started/studios",
diff --git a/platform-cloud/docs/administration/credit-management.md b/platform-cloud/docs/administration/credit-management.md
index 17d3f659b..ae309eea8 100644
--- a/platform-cloud/docs/administration/credit-management.md
+++ b/platform-cloud/docs/administration/credit-management.md
@@ -101,6 +101,7 @@ To request more credits:
When your organization or workspace credit balance is exhausted:
1. **Running pipelines paused**: All active pipeline runs and Studio sessions are automatically suspended.
+1. **Seqera Compute buckets locked**: Data can no longer be browsed or downloaded from Data Explorer.
1. **New launches blocked**: No new pipeline runs or Studios can be started using Seqera Compute environments.
1. **Resume runs manually**: After purchasing additional credits, manually [resume](../launch/cache-resume.mdx) paused pipelines.
diff --git a/platform-cloud/docs/compute-envs/seqera-compute.md b/platform-cloud/docs/compute-envs/seqera-compute.md
index bf2242e72..69a2ac407 100644
--- a/platform-cloud/docs/compute-envs/seqera-compute.md
+++ b/platform-cloud/docs/compute-envs/seqera-compute.md
@@ -52,7 +52,7 @@ Seqera Compute has default workspace limits on compute environments, and organiz
| us-east-2 (Ohio, USA) | eu-central-1 (Frankfurt, Germany) | |
| us-west-1 (Northern California, USA) | eu-west-3 (Paris, France) | |
:::
-1. Configure any advanced options described in the next section, as needed.
+1. Configure any [advanced options](#advanced-options-optional) described in the next section, as needed.
1. Select **Add** to complete the Seqera Compute environment configuration and return to the compute environments list. It will take a few seconds for the compute environment resources to be created before you are ready to launch pipelines or add studios.
:::info
diff --git a/platform-cloud/docs/getting-started/quickstart-demo/add-data.md b/platform-cloud/docs/getting-started/quickstart-demo/add-data.md
index e4947b672..2b2bff3d7 100644
--- a/platform-cloud/docs/getting-started/quickstart-demo/add-data.md
+++ b/platform-cloud/docs/getting-started/quickstart-demo/add-data.md
@@ -55,12 +55,12 @@ In Data Explorer, you can:

- **View bucket contents**:
- Select a bucket name from the list to view the bucket contents. The file type, size, and path of objects are displayed in columns next to the object name. For example, view the outputs of your [nf-core/rnaseq](./comm-showcase#launch-the-nf-corernaseq-pipeline) run:
+ Select a bucket name from the list to view the bucket contents. The file type, size, and path of objects are displayed in columns next to the object name. For example, view the outputs of your [nf-core/rnaseq](../../quickstart.md#nf-corernaseq) run:

- **Preview files**:
- Select a file to open a preview window that includes a **Download** button. For example, view the resultant gene counts of the salmon quantification step of your [nf-core/rnaseq](./comm-showcase#launch-the-nf-corernaseq-pipeline) run:
+ Select a file to open a preview window that includes a **Download** button. For example, view the resultant gene counts of the salmon quantification step of your [nf-core/rnaseq](../../quickstart.md#nf-corernaseq) run:

diff --git a/platform-cloud/docs/getting-started/quickstart-demo/comm-showcase.md b/platform-cloud/docs/getting-started/quickstart-demo/comm-showcase.md
deleted file mode 100644
index f8a9c60dd..000000000
--- a/platform-cloud/docs/getting-started/quickstart-demo/comm-showcase.md
+++ /dev/null
@@ -1,350 +0,0 @@
----
-title: "Explore Platform Cloud"
-description: "Seqera Platform Cloud demonstration walkthrough"
-date created: "8 Jul 2024"
-last updated: "14 June 2025"
-tags: [platform, launch, pipelines, launchpad, showcase tutorial]
-toc_max_heading_level: 3
----
-
-:::info
-This demo tutorial provides an introduction to Seqera Platform, including instructions to:
-- Launch, monitor, and optimize the [*nf-core/rnaseq*](https://github.com/nf-core/rnaseq) pipeline.
-- Select pipeline input data with [Data Explorer](../../data/data-explorer) and Platform [datasets](../../data/datasets).
-- Perform interactive analysis of pipeline results with [Studios](../../studios/overview).
-
-The Platform Community Showcase is a Seqera-managed demonstration workspace with all the resources needed to follow along with this tutorial. All [Seqera Cloud](https://cloud.seqera.io) users have access to this example workspace by default.
-:::
-
-The Launchpad in every Platform workspace allows users to easily create and share Nextflow pipelines that can be executed on any supported infrastructure, including all public clouds and most HPC schedulers. A Launchpad pipeline consists of a pre-configured workflow repository, [compute environment](../../compute-envs/overview), and launch parameters.
-
-The Community Showcase contains 15 preconfigured pipelines, including [*nf-core/rnaseq*](https://github.com/nf-core/rnaseq), a bioinformatics pipeline used to analyze RNA sequencing data.
-
-The workspace also includes three preconfigured AWS Batch compute environments to run Community Showcase pipelines, and various Platform datasets and public data sources (accessed via Data Explorer) to use as pipeline input.
-
-:::note
-To skip this Community Showcase demo and start running pipelines on your own infrastructure:
-1. Set up an [organization workspace](../workspace-setup).
-1. Create a workspace [compute environment](../../compute-envs/overview) for your cloud or HPC compute infrastructure.
-1. [Add pipelines](./add-pipelines) to your workspace.
-:::
-
-## Launch the nf-core/rnaseq pipeline
-
-:::note
-This guide is based on version 3.14.0 of the *nf-core/rnaseq* pipeline. Launch form parameters may differ in other versions.
-:::
-
-Navigate to the Launchpad in the `community/showcase` workspace and select **Launch** next to the *nf-core-rnaseq* pipeline to open the launch form.
-
- 
-
-The launch form consists of **General config**, **Run parameters**, and **Advanced options** sections to specify your run parameters before execution, and an execution summary. Use section headings or select the **Previous** and **Next** buttons at the bottom of the page to navigate between sections.
-
-
- Nextflow parameter schema
-
- The launch form lets you configure the pipeline execution. The pipeline parameters in this form are rendered from a [pipeline schema](../../pipeline-schema/overview) file in the root of the pipeline Git repository. `nextflow_schema.json` is a simple JSON-based schema describing pipeline parameters for pipeline developers to easily adapt their in-house Nextflow pipelines to be executed in Platform.
-
- :::tip
- See [Best Practices for Deploying Pipelines with the Seqera Platform](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) to learn how to build the parameter schema for any Nextflow pipeline automatically with tooling maintained by the nf-core community.
- :::
-
-
-
-### General config
-
-Most Showcase pipeline parameters are prefilled. Specify the following fields to identify your run amongst other workspace runs:
-
-- **Workflow run name**: A unique identifier for the run, pre-filled with a random name. This can be customized.
-- **Labels**: Assign new or existing labels to the run. For example, a project ID or genome version.
-
-### Run parameters
-
-There are three ways to enter **Run parameters** prior to launch:
-
-- The **Input form view** displays form fields to enter text, select attributes from dropdowns, and browse input and output locations with [Data Explorer](../../data/data-explorer).
-- The **Config view** displays a raw schema that you can edit directly. Select JSON or YAML format from the **View as** dropdown.
-- **Upload params file** allows you to upload a JSON or YAML file with run parameters.
-
-#### input
-
-Most nf-core pipelines use the `input` parameter in a standardized way to specify an input samplesheet that contains paths to input files (such as FASTQ files) and any additional metadata needed to run the pipeline. Use **Browse** to select either a file path in cloud storage via **Data Explorer**, or a pre-loaded **Dataset**:
-
-- In the **Data Explorer** tab, select the `nf-tower-data` bucket, then search for and select the `rnaseq_sample_data.csv` file.
-- In the **Datasets** tab, search for and select `rnaseq_sample_data`.
-
-
-
-:::tip
-See [Add data](./add-data) to learn how to add datasets and Data Explorer cloud buckets to your own workspaces.
-:::
-
-#### output
-
-Most nf-core pipelines use the `outdir` parameter in a standardized way to specify where the final results created by the pipeline are published. `outdir` must be unique for each pipeline run. Otherwise, your results will be overwritten.
-
-For this tutorial test run, keep the default `outdir` value (`./results`).
-
-:::tip
-For the `outdir` parameter in pipeline runs in your own workspace, select **Browse** to specify a cloud storage directory using Data Explorer, or enter a cloud storage directory path to publish pipeline results to manually.
-:::
-
-#### Pipeline-specific parameters
-
-Modify other parameters to customize the pipeline execution through the parameters form. For example, under **Read trimming options**, change the `trimmer` to select `fastp` in the dropdown menu instead of `trimgalore`.
-
-
-
-Select **Launch** to start the run and be directed to the **Runs** tab with your run in a **submitted** status at the top of the list.
-
-## View run information
-
-### Run details page
-
-As the pipeline runs, run details will populate with parameters, logs, and other important execution details:
-
-
- View run details
-
- - **Command-line**: The Nextflow command invocation used to run the pipeline. This contains details about the pipeline version (`-r 3.14.0` flag) and profile, if specified (`-profile test` flag).
- - **Parameters**: The exact set of parameters used in the execution. This is helpful for reproducing the results of a previous run.
- - **Resolved Nextflow configuration**: The full Nextflow configuration settings used for the run. This includes parameters, but also settings specific to task execution (such as memory, CPUs, and output directory).
- - **Execution Log**: A summarized Nextflow log providing information about the pipeline and the status of the run.
- - **Datasets**: Link to datasets, if any were used in the run.
- - **Reports**: View pipeline outputs directly in Platform.
-
- 
-
-
-
-### View reports
-
-Most Nextflow pipelines generate reports or output files which are useful to inspect at the end of the pipeline execution. Reports can contain quality control (QC) metrics that are important to assess the integrity of the results.
-
-
- View run reports
-
-
- 
-
- For example, for the *nf-core/rnaseq* pipeline, view the [MultiQC](https://docs.seqera.io/multiqc) report generated. MultiQC is a helpful reporting tool to generate aggregate statistics and summaries from bioinformatics tools.
-
- 
-
- The paths to report files point to a location in cloud storage (in the `outdir` directory specified during launch), but you can view the contents directly and download each file without navigating to the cloud or a remote filesystem.
-
- #### Specify outputs in reports
-
- To customize and instruct Platform where to find reports generated by the pipeline, a [tower.yml](https://github.com/nf-core/rnaseq/blob/master/tower.yml) file that contains the locations of the generated reports must be included in the pipeline repository.
-
- In the *nf-core/rnaseq* pipeline, the `MULTIQC` process step generates a MultiQC report file in HTML format:
-
- ```yaml
- reports:
- multiqc_report.html:
- display: "MultiQC HTML report"
- ```
-
-
-
-:::note
-See [Reports](../../reports/overview) to configure reports for pipeline runs in your own workspace.
-:::
-
-### View general information
-
-The **Run details** page includes general information about who executed the run and when, the Git hash and tag used, and additional details about the compute environment and Nextflow version used.
-
-
- View general run information
-
- 
-
- The **General** panel displays top-level information about a pipeline run:
-
- - Unique workflow run ID
- - Workflow run name
- - Timestamp of pipeline start (timezones are based on system settings)
- - Pipeline version and Git commit ID
- - Nextflow session ID
- - Username of the launcher
- - Work directory path
-
-
-
-### View process and task details
-
-Scroll down the page to view:
-
-- The progress of individual pipeline **Processes**
-- **Aggregated stats** for the run (total walltime, CPU hours)
-- **Workflow metrics** (CPU efficiency, memory efficiency)
-- A **Task details** table for every task in the workflow
-
-The task details table provides further information on every step in the pipeline, including task statuses and metrics:
-
-
- View task details
-
- Select a task in the task table to open the **Task details** dialog. The dialog has three tabs: **About**, **Execution log**, and **Data Explorer**.
-
- #### About
-
- The **About** tab includes:
-
- 1. **Name**: Process name and tag
- 2. **Command**: Task script, defined in the pipeline process
- 3. **Status**: Exit code, task status, and number of attempts
- 4. **Work directory**: Directory where the task was executed
- 5. **Environment**: Environment variables that were supplied to the task
- 6. **Execution time**: Metrics for task submission, start, and completion time (timezones are based on system settings)
- 7. **Resources requested**: Metrics for the resources requested by the task
- 8. **Resources used**: Metrics for the resources used by the task
-
- 
-
- #### Execution log
-
- The **Execution log** tab provides a real-time log of the selected task's execution. Task execution and other logs (such as stdout and stderr) are available for download from here, if still available in your compute environment.
-
-
-
-### Task work directory in Data Explorer
-
-If a task fails, a good place to begin troubleshooting is the task's work directory. Nextflow hash-addresses each task of the pipeline and creates unique directories based on these hashes.
-
-
- View task log and output files
-
- Instead of navigating through a bucket on the cloud console or filesystem, use the **Data Explorer** tab in the Task window to view the work directory.
-
- Data Explorer allows you to view the log files and output files generated for each task, directly within Platform. You can view, download, and retrieve the link for these intermediate files to simplify troubleshooting.
-
- 
-
-
-
-## Interactive analysis
-
-Interactive analysis of pipeline results is often performed in platforms like Jupyter Notebooks or using the [R-IDE](https://github.com/seqeralabs/r-ide). Setting up the infrastructure for these platforms, including accessing pipeline data and the necessary bioinformatics packages, can be complex and time-consuming.
-
-**Studios** streamlines the process of creating interactive analysis environments for Platform users. With built-in templates, creating a Studio is as simple as adding and sharing pipelines or datasets.
-
-### Analyze RNAseq data in Studios
-
-In the **Studios** tab, you can monitor and see the details of the Studios in the Community Showcase workspace.
-
-Studios is used to perform bespoke analysis on the results of upstream workflows. For example, in the Community Showcase workspace we have run the *nf-core/rnaseq* pipeline to quantify gene expression, followed by *nf-core/differentialabundance* to derive differential expression statistics. The workspace contains a Studio with these results from cloud storage mounted into the Studio to perform further analysis. One of these outputs is an RShiny application, which can be deployed for interactive analysis.
-
-#### Connect to the RNAseq analysis Studio
-
-Select the *rnaseq_to_differentialabundance* Studio. This Studio consists of an R-IDE that uses an existing compute environment available in the Community Showcase workspace. The Studio also contains mounted data generated from the *nf-core/rnaseq* and subsequent *nf-core/differentialabundance* pipeline runs, directly from AWS S3.
-
-
-
-Select **Connect** to view the running R-IDE session. The *rnaseq_to_differentialabundance* Studio includes the necessary R packages for deploying a web app to visualize the RNAseq data.
-
-Deploy the RShiny app in the Studio by selecting the play button on the last chunk of the R script:
-
-
-
-:::note
-You can specify the resources each Studio will use. When [you create your own Studios](../../studios/overview) with shared compute environment resources, you must allocate sufficient resources to the compute environment to prevent Studio or pipeline run interruptions.
-:::
-
-### Explore results
-
-The RShiny app will deploy in a separate browser window, providing a data interface. Here you can view information about your sample data, perform QC or exploratory analysis, and view the results of differential expression analyses.
-
-
-
-
- Sample clustering with PCA plots
-
- In the **QC/Exploratory** tab, select the PCA (Principal Component Analysis) plot to visualize how the samples group together based on their gene expression profiles.
-
- In this example, we used RNA sequencing data from the publicly-available ENCODE project, which includes samples from four different cell lines:
-
- - **GM12878**: a lymphoblastoid cell line
- - **K562**: a chronic myelogenous leukemia cell line
- - **MCF-7**: a breast cancer cell line
- - **H1-hESC**: human embryonic stem cells
-
- What to look for in the PCA plot:
-
- - **Replicate clustering**: Ideally, biological replicates of the same cell type should cluster closely together. For example, replicates of MCF-7 (breast cancer cell line) group together. This indicates consistent gene expression profiles among biological replicates.
- - **Cell type separation**: Different cell types should form distinct clusters. For instance, GM12878, K562, MCF-7, and H1-hESC samples should each form their own separate clusters, reflecting their unique gene expression patterns.
-
- From this PCA plot, you can gain insights into the consistency and quality of your sequencing data, identify any potential issues, and understand the major sources of variation among your samples - all directly in Platform.
-
- 
-
-
-
-
- Gene expression changes with Volcano plots
-
- In the **Differential** tab, select **Volcano plots** to compare genes with significant changes in expression between two samples. For example, filter for `Type: H1 vs MCF-7` to view the differences in expression between these two cell lines.
-
- 1. **Identify upregulated and downregulated genes**: The x-axis of the volcano plot represents the log2 fold change in gene expression between the H1 and MCF-7 samples, while the y-axis represents the statistical significance of the changes.
-
- - **Upregulated genes in MCF-7**: Genes on the left side of the plot (negative fold change) are upregulated in the MCF-7 samples compared to H1. For example, the SHH gene, which is known to be upregulated in cancer cell lines, prominently appears here.
-
- 2. **Filtering for specific genes**: If you are interested in specific genes, use the filter function. For example, filter for the SHH gene in the table below the plot. This allows you to quickly locate and examine this gene in more detail.
-
- 3. **Gene expression bar plot**: After filtering for the SHH gene, select it to navigate to a gene expression bar plot. This plot will show you the expression levels of SHH across all samples, allowing you to see in which samples it is most highly expressed.
-
- - Here, SHH is most highly expressed in MCF-7, which aligns with its known role in cancer cell proliferation.
-
- Using the volcano plot, you can effectively identify and explore the genes with the most significant changes in expression between your samples, providing a deeper understanding of the molecular differences.
-
- 
-
-
-
-### Collaborate in the Studio
-
-To share the results of your RNAseq analysis or allow colleagues to perform exploratory analysis, select the options menu for the Studio you want to share then select **Copy Studio URL**. With this link, other authenticated users with the **Connect** [role](../../orgs-and-teams/roles) (or greater) can access the session directly.
-
-:::note
-See [Studios](../../studios/overview) to learn how to create Studios in your own workspace.
-:::
-
-## Pipeline optimization
-
-Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
-
-However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
-
-Pipeline optimization analyzes resource usage data from previous runs to optimize the resource allocation for future runs. After a successful run, optimization becomes available, indicated by the lightbulb icon next to the pipeline turning black.
-
-
- Optimize nf-core/rnaseq
-
- Navigate back to the Launchpad and select the lightbulb icon next to the *nf-core/rnaseq* pipeline to view the optimized profile. You have the flexibility to tailor the optimization's target settings and incorporate a retry strategy as needed.
-
- #### View optimized configuration
-
- When you select the lightbulb, you can access an optimized configuration profile in the second tab of the **Customize optimization profile** window.
-
- This profile consists of Nextflow configuration settings for each process and each resource directive (where applicable): **cpus**, **memory**, and **time**. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
-
- Once optimization is selected, subsequent runs of that pipeline will inherit the optimized configuration profile, indicated by the black lightbulb icon with a checkmark.
-
- :::note
- Optimization profiles are generated from one run at a time, defaulting to the most recent run, and _not_ an aggregation of previous runs.
- :::
-
- 
-
- Verify the optimized configuration of a given run by inspecting the resource usage plots for that run and these fields in the run's task table:
-
- | Description | Key |
- | ------------ | ---------------------- |
- | CPU usage | `pcpu` |
- | Memory usage | `peakRss` |
- | Runtime | `start` and `complete` |
-
-
-
diff --git a/platform-cloud/docs/quickstart.md b/platform-cloud/docs/quickstart.md
new file mode 100644
index 000000000..6f4c77d2f
--- /dev/null
+++ b/platform-cloud/docs/quickstart.md
@@ -0,0 +1,144 @@
+---
+title: "Explore Seqera Cloud"
+description: "Explore your free workspace resources and launch your first pipelines in Seqera Cloud."
+date created: "2025-10-16"
+toc_max_heading_level: 4
+tags: [pipelines, versioning, nextflow, parameters]
+---
+
+When you create a new Seqera Cloud account with a verified work email, Seqera automatically provisions starter resources on your first login. These resources give you everything you need to start running bioinformatics pipelines immediately, including a Seqera Compute environment and $100 in free credits to launch pipelines and Studios.
+
+:::note
+Accounts registered with generic email domains, such as Gmail, are not eligible for the free resources detailed in this guide.
+:::
+
+This guide shows you how to launch your first pipelines with the starter resources provided.
+
+## Your free resources
+
+When you first log in after verifying your email, Seqera automatically creates an organization and workspace for you. You can select **Explore Platform** to look around your workspace while starter resources are provisioned in the background, or wait for setup to complete. Resource provisioning typically takes under a minute. Once setup is complete, you'll see a banner confirming that starter resources are ready for you to start launching pipelines.
+
+Seqera provisions four types of resources to get you started:
+- A [Seqera Compute environment](./compute-envs/seqera-compute.md) with $100 in free credits
+- [Credentials](./credentials/overview.md) used by your compute environment to create and manage cloud resources on your behalf
+- A cloud storage bucket in [Data Explorer](./data/data-explorer.md)
+- Pre-configured nf-core pipelines, ready to launch
+
+### Seqera Compute environment
+
+Your organization workspace includes a pre-configured [Seqera Compute](https://docs.seqera.io/platform-cloud/compute-envs/seqera-compute) environment that requires no cloud account setup or configuration. This environment includes $100 in free credits that can be used to run pipelines or Studios.
+
+Credits are consumed based on the computational resources used by your pipeline runs and Studio sessions, calculated from CPU-hours, GB-hours, and network and storage costs. You can monitor your credit balance in the **Usage overview** dropdown in the top navigation bar, or view detailed usage in your organization or workspace **Settings** tab.
+
+See [Credit management](./administration/credit-management) for more information on monitoring usage and requesting additional credits.
+
+### Data Explorer
+
+Your workspace includes an automatically provisioned cloud storage bucket in [Data Explorer](https://docs.seqera.io/platform-cloud/data/data-explorer), linked to your Seqera Compute environment. This bucket provides storage for pipeline outputs, intermediate files, and any data you want to browse or manage through the Seqera interface. Your organization includes 25 GB of free cloud storage.
+
+:::tip
+After completing pipeline test runs, delete work directory files and other data that you no longer need, to stay within your free cloud storage allowance.
+:::
+
+### Launchpad
+
+Your workspace Launchpad includes six pre-configured pipelines: Nextflow's hello world example and five [nf-core](https://nf-co.re) pipelines set up with a `test` profile so you can launch them immediately with test data.
+
+#### nextflow-io/hello
+
+Nextflow's [Hello World](https://github.com/nextflow-io/hello) — a simple example pipeline that demonstrates basic Nextflow functionality. This pipeline is ideal for verifying that your compute setup is working correctly and for understanding how pipeline execution works in Seqera.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nextflow-io/hello** pipeline.
+1. While this pipeline requires no inputs to run, you can optionally explore the parameters in the launch form. For example, note the **Work directory** is pre-populated with your compute environment work directory path.
+1. Select **Launch**.
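+
+You can also launch the same pipeline with the Seqera Platform CLI (`tw`). This is a minimal sketch, not part of the starter resources: it assumes the CLI is installed, a personal access token is exported as `TOWER_ACCESS_TOKEN`, the Launchpad pipeline is named `hello` in your workspace (check the Launchpad for the exact name), and `your-org/your-workspace` is replaced with your own organization and workspace names:
+
+```bash
+# Launch the Launchpad pipeline named "hello" in your workspace
+tw launch hello --workspace your-org/your-workspace
+
+# Alternatively, launch directly from the public repository
+tw launch https://github.com/nextflow-io/hello --workspace your-org/your-workspace
+```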
+
+#### nf-core/demultiplex
+
+The [nf-core/demultiplex](https://nf-co.re/demultiplex) pipeline separates pooled sequencing reads into individual samples based on barcode sequences. It supports Illumina sequencing data and can handle both single and dual indexing strategies.
+
+**Use case**: Sequencing facilities often pool multiple samples into a single sequencing run to reduce costs. This pipeline is used to separate the pooled data back into individual sample files based on the unique barcode assigned to each sample during library preparation.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nf-core/demultiplex** pipeline.
+1. From the **General config** tab, scroll down and copy your **Work directory** path. You can optionally enter a custom **Workflow run name** or create and add **Labels** to the run.
+1. From the **Run parameters** tab, scroll to the **outdir** field and paste your work directory path. It is recommended to add `/demultiplex/outdir` to the end, to keep your cloud storage organized (see the example path after these steps).
+1. If the **input** field is not automatically populated, fetch and paste the example samplesheet URL from the [nf-core/demultiplex documentation](https://nf-co.re/demultiplex/latest/docs/usage#example-pipeline-samplesheet).
+1. Select **Launch**.
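+
+For illustration, if your copied work directory path were the hypothetical value below, the **outdir** field would contain that path with `/demultiplex/outdir` appended (your actual bucket name will differ):
+
+```
+# Work directory copied from the General config tab (hypothetical value)
+s3://your-seqera-compute-bucket/work
+
+# Value to enter in the outdir field
+s3://your-seqera-compute-bucket/work/demultiplex/outdir
+```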
+
+#### nf-core/molkart
+
+The [nf-core/molkart](https://nf-co.re/molkart) pipeline processes Molecular Cartography spatial transcriptomics data. It performs image preprocessing, cell segmentation, and assignment of RNA spots to cells, producing cell-by-transcript count tables for downstream spatial analysis.
+
+This pipeline is used in spatial biology and pathology research to understand how different cell types are organized in tissue and how they interact. For example, researchers studying tumor immunology use it to map where immune cells are located relative to cancer cells and analyze their spatial relationships.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nf-core/molkart** pipeline.
+1. From the **General config** tab, scroll down and copy your **Work directory** path. You can also optionally enter a custom **Workflow run name** or create and add **Labels** to the run.
+1. From the **Run parameters** tab, scroll to the **outdir** field and paste your work directory path. It is recommended to add `/molkart/outdir` to the end, to keep your cloud storage organized.
+1. If the **input** field is not automatically populated, fetch and paste the example samplesheet URL from the [nf-core/molkart documentation](https://nf-co.re/molkart/latest/docs/usage#full-samplesheet).
+1. Select **Launch**.
+
+#### nf-core/rnaseq
+
+The [nf-core/rnaseq](https://nf-co.re/rnaseq) pipeline performs RNA sequencing analysis, from raw reads to gene expression quantification. It includes quality control, read alignment, transcript quantification, and quality metrics reporting.
+
+RNA-seq is one of the most common applications in genomics research. Scientists use this pipeline to measure gene expression levels across different conditions, time points, or tissues. For example, researchers studying disease mechanisms might compare gene expression between healthy and diseased tissue to identify which genes are turned on or off.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nf-core/rnaseq** pipeline.
+1. From the **General config** tab, scroll down and copy your **Work directory** path. You can also optionally enter a custom **Workflow run name** or create and add **Labels** to the run.
+1. From the **Run parameters** tab, scroll to the **outdir** field and paste your work directory path. It is recommended to add `/rnaseq/outdir` to the end, to keep your cloud storage organized.
+1. If the **input** field is not automatically populated, fetch and paste the example samplesheet URL from the [nf-core/rnaseq documentation](https://nf-co.re/rnaseq/latest/docs/usage#full-samplesheet). A minimal samplesheet sketch is shown after these steps.
+1. Select **Launch**.
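+
+For reference, the nf-core/rnaseq samplesheet is a CSV file with `sample`, `fastq_1`, `fastq_2`, and `strandedness` columns. The sketch below uses placeholder sample names and file paths; see the linked usage documentation for the authoritative format:
+
+```csv
+sample,fastq_1,fastq_2,strandedness
+CONTROL_REP1,s3://your-bucket/CONTROL_REP1_R1.fastq.gz,s3://your-bucket/CONTROL_REP1_R2.fastq.gz,auto
+TREATED_REP1,s3://your-bucket/TREATED_REP1_R1.fastq.gz,s3://your-bucket/TREATED_REP1_R2.fastq.gz,auto
+```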
+
+#### nf-core/sarek
+
+The [nf-core/sarek](https://nf-co.re/sarek) pipeline performs variant calling and annotation from whole genome or targeted sequencing data. It detects germline and somatic variants, including SNVs, indels, and structural variants, and provides comprehensive annotation.
+
+This pipeline is widely used in cancer genomics and rare disease research. Clinical researchers use Sarek to identify disease-causing mutations in patient genomes, while cancer researchers use it to detect somatic mutations in tumor samples and compare them to normal tissue.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nf-core/sarek** pipeline.
+1. From the **General config** tab, scroll down and copy your **Work directory** path. You can also optionally enter a custom **Workflow run name** or create and add **Labels** to the run.
+1. From the **Run parameters** tab, scroll to the **outdir** field and paste your work directory path. It is recommended to add `/sarek/outdir` to the end, to keep your cloud storage organized.
+1. If the **input** field is not automatically populated, fetch and paste the example samplesheet URL from the [nf-core/sarek documentation](https://nf-co.re/sarek/latest/docs/usage#overview-samplesheet-columns).
+1. Select **Launch**.
+
+#### nf-core/scrnaseq
+
+The [nf-core/scrnaseq](https://nf-co.re/scrnaseq) pipeline processes single-cell RNA sequencing data. It performs read alignment, cell barcode and UMI quantification, quality control, and generates count matrices for downstream analysis.
+
+Single-cell RNA-seq allows researchers to measure gene expression in individual cells rather than bulk tissue. This pipeline is used to study cellular heterogeneity, identify rare cell populations, and understand how individual cells respond differently to treatments or disease states. For example, immunologists use it to characterize the diverse cell types within the immune system.
+
+**To launch this pipeline**:
+
+1. From the **Launchpad** in the left navigation menu, select **Launch** next to the **nf-core/scrnaseq** pipeline.
+1. From the **General config** tab, scroll down and copy your **Work directory** path. You can also optionally enter a custom **Workflow run name** or create and add **Labels** to the run.
+1. From the **Run parameters** tab, scroll to the **outdir** field and paste your work directory path. It is recommended to add `/scrnaseq/outdir` to the end, to keep your cloud storage organized.
+1. If the **input** field is not automatically populated, fetch and paste the example samplesheet URL from the [nf-core/scrnaseq documentation](https://nf-co.re/scrnaseq/latest/docs/usage#full-samplesheet).
+1. Select **Launch**.
+
+### Studios
+
+Seqera Studios provides cloud-based, on-demand development environments for interactive bioinformatics work. Studios are fully integrated with Seqera Platform and offer VS Code or JupyterLab interfaces with access to your pipeline data and compute resources.
+
+While your free workspace does not include an existing Studio, see [Studios for interactive analysis](https://docs.seqera.io/platform-cloud/studios/overview) to learn how to configure and run Studios on your Seqera Compute environment. The guide includes instructions for adding publicly available data to analyze in your Studios.
+
+## Next steps
+
+After launching your first pipelines, you can:
+- [Monitor run progress](./monitoring/run-details.mdx)
+- [Explore output data](./data/data-explorer.md)
+
+When you're ready to run pipelines and Studios with your own data, you can:
+- [Add data](./getting-started/quickstart-demo/add-data.md)
+- [Add new pipelines](./getting-started/quickstart-demo/add-pipelines.md)
+- [Add participants](./getting-started/workspace-setup.md) to collaborate with your team
+
+Contact the [Seqera community forum](https://community.seqera.io/) or ask [Seqera AI](https://seqera.io/ask-ai/chat-v2) if you encounter any unexpected issues or need assistance.
\ No newline at end of file
diff --git a/platform-enterprise_versioned_docs/version-23.2/getting-started/deployment-options.md b/platform-enterprise_versioned_docs/version-23.2/getting-started/deployment-options.md
index 51dce491f..6bb782f37 100644
--- a/platform-enterprise_versioned_docs/version-23.2/getting-started/deployment-options.md
+++ b/platform-enterprise_versioned_docs/version-23.2/getting-started/deployment-options.md
@@ -14,7 +14,7 @@ Tower is available in two deployment editions and can be accessed via web UI, [A
The hosted Cloud edition of Tower is available free of charge at [cloud.tower.nf](https://tower.nf/login) — log in with your GitHub or Google credentials.
-Cloud is recommended for users who are new to Tower. It's an ideal choice for individuals and organizations looking to set up quickly. The service is hosted by Seqera. See [Community showcase](https://docs.seqera.io/platform-cloud/getting-started/quickstart-demo/comm-showcase) for instructions to launch your first pipeline. Tower Cloud has a limit of five concurrent workflow runs per user.
+Cloud is recommended for users who are new to Tower. It's an ideal choice for individuals and organizations looking to set up quickly. The service is hosted by Seqera. See the [Seqera Cloud quickstart](https://docs.seqera.io/platform-cloud/quickstart) for instructions to launch your first pipeline. Tower Cloud has a limit of five concurrent workflow runs per user.

diff --git a/platform-enterprise_versioned_docs/version-23.3/getting-started/deployment-options.md b/platform-enterprise_versioned_docs/version-23.3/getting-started/deployment-options.md
index 539e7fe96..a0aff4f58 100644
--- a/platform-enterprise_versioned_docs/version-23.3/getting-started/deployment-options.md
+++ b/platform-enterprise_versioned_docs/version-23.3/getting-started/deployment-options.md
@@ -12,7 +12,7 @@ Seqera Platform is available in two deployment editions and can be accessed via
### Seqera Platform Cloud
-The hosted Seqera Cloud edition is recommended for users who are new to Seqera. It's an ideal choice for individuals and organizations looking to set up quickly. See [Community Showcase](https://docs.seqera.io/platform-cloud/getting-started/quickstart-demo/comm-showcase) for instructions to launch your first pipeline. Seqera Cloud has a limit of five concurrent workflow runs per user. It's available free of charge at [cloud.tower.nf](https://tower.nf/login).
+The hosted Seqera Cloud edition is recommended for users who are new to Seqera. It's an ideal choice for individuals and organizations looking to set up quickly. See the [Seqera Cloud quickstart](https://docs.seqera.io/platform-cloud/quickstart) for instructions to launch your first pipeline. Seqera Cloud has a limit of five concurrent workflow runs per user. It's available free of charge at [cloud.tower.nf](https://tower.nf/login).
### Seqera Platform Enterprise
diff --git a/platform-enterprise_versioned_docs/version-23.3/getting-started/overview.md b/platform-enterprise_versioned_docs/version-23.3/getting-started/overview.md
index 5cab2b94e..dfa15a7c7 100644
--- a/platform-enterprise_versioned_docs/version-23.3/getting-started/overview.md
+++ b/platform-enterprise_versioned_docs/version-23.3/getting-started/overview.md
@@ -9,7 +9,7 @@ Seqera Platform is available in two [deployment editions](../getting-started/dep
## Community Showcase
-When you first log in to Seqera Platform, you land on the [Community Showcase](https://docs.seqera.io/platform-cloud/getting-started/quickstart-demo/comm-showcase) Launchpad. This is an example workspace provided by Seqera. It's pre-configured with pipelines, compute environments, and credentials to get you running Nextflow pipelines immediately. The pre-built community AWS Batch environments include 100 free hours of compute.
+When you first log in to Seqera Platform, you land on the [Community Showcase](https://docs.seqera.io/platform-cloud/quickstart) Launchpad. This is an example workspace provided by Seqera. It's pre-configured with pipelines, compute environments, and credentials to get you running Nextflow pipelines immediately. The pre-built community AWS Batch environments include 100 free hours of compute.
The Community Showcase consists of:
diff --git a/src/modules/Homepage/index.tsx b/src/modules/Homepage/index.tsx
index 34a5f6fd9..7efc72e69 100644
--- a/src/modules/Homepage/index.tsx
+++ b/src/modules/Homepage/index.tsx
@@ -120,7 +120,7 @@ export default function Home(): JSX.Element {