diff --git a/pages/dashboards/templates/dashboards/index.html b/pages/dashboards/templates/dashboards/index.html
index a1b87b8..549be7f 100644
--- a/pages/dashboards/templates/dashboards/index.html
+++ b/pages/dashboards/templates/dashboards/index.html
@@ -16,5 +16,8 @@

Available dashboards

  • SARS-CoV-2 Variant Competition
  • + Multi-disease serology

{% endblock %}

diff --git a/pages/dashboards/templates/dashboards/multidisease_serology.html b/pages/dashboards/templates/dashboards/multidisease_serology.html
new file mode 100644
index 0000000..84ed12c
--- /dev/null
+++ b/pages/dashboards/templates/dashboards/multidisease_serology.html
@@ -0,0 +1,163 @@
+{% extends "base.html" %}
+
+{% block content %}
+ Skip to content
+

    Multi-disease serology

    +
    +
    +
    +

    All data last updated: TODO

    +
    + +

    Introduction

    +

    + The COVID-19 pandemic highlighted the importance of serological surveillance in tracking viral
    + transmission dynamics, understanding immune responses, guiding vaccination strategies, and
    + informing public health decisions. High‑throughput serological assays for SARS‑CoV‑2 were
    + developed very early in the pandemic at KTH and SciLifeLab to enable surveillance of populations
    + globally. For information about work done with SARS‑CoV‑2 during the pandemic, see the
    + historical background section.
    +

    +

    + As we move on from the pandemic, it is crucial that serological studies cover more pathogens
    + than SARS‑CoV‑2 alone. One of the capabilities within
    + SciLifeLab's Pandemic Laboratory Preparedness (PLP) programme,
    + 'Multi‑disease serology', aims to create a sustainable, long‑term
    + resource enabling broad, frequent, and large‑scale surveillance of serostatus. The team is working
    + to create antibody repertoires that can be used to analyse thousands of samples on hundreds of
    + antigens. The project is led by Peter Nilsson at KTH Royal Institute of Technology and SciLifeLab.
    + To learn more, check out the pandemic preparedness resource page for this project.
    +

    + +

    Historical background

    +

    + During the pandemic, those working on the multi‑disease serology study created a high‑throughput
    + multiplex bead‑based serological assay for SARS‑CoV‑2 (see
    + Hober et al. (2021) (opens in a new tab)
    + and the Serology tests for SARS‑CoV‑2 at SciLifeLab Data Dashboard
    + for details). As of July 2023, the assay had been used to analyse over 250,000 samples and had
    + contributed to around 40 publications (opens in a new tab),
    + including studies on seroprevalence and on vaccine efficacy in immunocompromised individuals and
    + individuals with autoimmune diseases. Following the pandemic, the group refocused its efforts
    + towards pandemic preparedness and began extending the assay into a platform for parallelised
    + multi‑disease serological studies, covering a wide range of antigens representing various
    + infectious diseases. The bead‑based setup enables a stepwise addition of new proteins, allowing
    + continuous implementation of pathogen‑representing antigens.
    +

    + +

    Current methods and progress

    +

    + The project has produced and evaluated many antigens, including a wide range of variants of the
    + SARS‑CoV‑2 proteins, with a focus on the spike glycoprotein, covering the majority of mutated
    + variants. The team has also created spike representations of SARS, MERS, and the four other human
    + coronaviruses that cause the common cold (HKU1, OC43, NL63, and 229E), as well as influenza virus
    + antigens representing the glycoproteins haemagglutinin and neuraminidase. For influenza, the
    + initial focus has been on the variants present in the trivalent vaccine for the 2021–2022 season:
    + the A(H1N1)/Wisconsin, A(H3N2)/Cambodia, and B(Victoria)/Washington strains. Furthermore, the team
    + has produced representations of Respiratory Syncytial Virus (RSV), including two surface proteins
    + (G and F) in two different strains. Antigens representing mpox have also been generated and
    + included in the current bead‑based antigen collection.
    +

    +

    + Other viral respiratory infections monitored in Sweden include adenovirus, metapneumovirus, and
    + parainfluenza virus; these have also been added to the project. The project has designed
    + representations of the fibre protein of adenovirus B7, the F and G proteins of metapneumovirus
    + strain CAN97‑83, and the HN and F proteins of parainfluenza virus, based on strain Washington/1957
    + and strain C39, respectively.
    +

    +

    + The proteins designed and produced by the project to date are listed in the table below.
    +

    + +

    Table of proteins created at KTH

    +

    + Proteins designed, expressed, purified, and characterised at the
    + KTH node of Protein Production Sweden (opens in a new tab),
    + a national research infrastructure funded by the Swedish Research Council. The proteins have been
    + expressed in HEK or CHO cells or in E. coli, with different affinity tags, and either as fragments
    + or as full‑length proteins.
    +

    +
    + Rotating your phone will improve the view of this table. +
    +
    +    <table>
    +        <caption>Proteins designed, expressed, purified, and characterised at KTH</caption>
    +        <thead>
    +            <tr>
    +                {% for header in kth_headers %}
    +                <th>{{ header }}</th>
    +                {% endfor %}
    +            </tr>
    +        </thead>
    +        <tbody>
    +            {% if kth_rows %}
    +            {% for row in kth_rows %}
    +            <tr>
    +                {% for cell in row %}
    +                <td>{{ cell }}</td>
    +                {% endfor %}
    +            </tr>
    +            {% endfor %}
    +            {% else %}
    +            <tr>
    +                <td>No data available.</td>
    +            </tr>
    +            {% endif %}
    +        </tbody>
    +    </table>
    +
    + +

    Ongoing work and collaborations

    +

    + The work of the project is now expanding into the areas of flaviviruses (Tick‑borne encephalitis
    + virus, Zika virus, Dengue virus, West Nile virus, Yellow fever virus, Japanese encephalitis virus)
    + and herpesviruses (Epstein–Barr virus, Varicella zoster virus, Herpes simplex virus,
    + Cytomegalovirus).
    +

    +

    + The project is also collaborating with another SciLifeLab PLP project,
    + “Systems‑level immunomonitoring to unravel immune response to a novel
    + pathogen”, headed by Petter Brodin (Karolinska Institutet, KI) and Jochen Schwenk (KTH), to
    + include a wide range of externally produced antigens representing a large part of the Swedish
    + vaccination programme (see the list below).
    +

    +

    + The multi‑disease serological assay is under constant development and will gradually be
    + incorporated into two SciLifeLab infrastructure units: Autoimmunity and Serology Profiling, and
    + Affinity Proteomics Stockholm. The goal is to provide a flexible and quickly adaptable assay for
    + high‑throughput multiplex studies on seroprevalence, available to both the Public Health Agency of
    + Sweden and researchers in academia and industry.
    +

    + +

    Externally produced antigens

    +
    +    <table>
    +        <caption>Externally produced antigens included in the multi‑disease serology assay</caption>
    +        <thead>
    +            <tr>
    +                {% for header in external_headers %}
    +                <th>{{ header }}</th>
    +                {% endfor %}
    +            </tr>
    +        </thead>
    +        <tbody>
    +            {% if external_rows %}
    +            {% for row in external_rows %}
    +            <tr>
    +                {% for cell in row %}
    +                <td>{{ cell }}</td>
    +                {% endfor %}
    +            </tr>
    +            {% endfor %}
    +            {% else %}
    +            <tr>
    +                <td>No data available.</td>
    +            </tr>
    +            {% endif %}
    +        </tbody>
    +    </table>
    +
    +
    +
+{% endblock %}

diff --git a/pages/dashboards/urls.py b/pages/dashboards/urls.py
index 7adbed4..ed689d7 100644
--- a/pages/dashboards/urls.py
+++ b/pages/dashboards/urls.py
@@ -1,9 +1,13 @@
+# ruff: noqa: E501
 from django.urls import path
-from .views import DashboardsIndex, LineageCompetition
+from .views import DashboardsIndex, LineageCompetition, MultiDiseaseSerology
 
 app_name = "dashboards"
 
+# fmt: off
 urlpatterns = [
     path("", DashboardsIndex.as_view(), name="index"),
     path("lineage-competition/", LineageCompetition.as_view(), name="lineage_competition"),
+    path("multidisease-serology/", MultiDiseaseSerology.as_view(), name="multidisease_serology"),
 ]
+# fmt: on
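
With the route registered under the app namespace, the page can be reverse-resolved by name. A
minimal sketch (assuming the dashboards URLconf is included in the project's root URLconf, as for
the existing pages):

    from django.urls import reverse

    url = reverse("dashboards:multidisease_serology")
    # `app_name = "dashboards"` provides the namespace; the name matches the path() above.
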
+ """ + timeout = timeout or DEFAULT_TIMEOUT + # Build final headers with precedence: DEFAULT < user_agent arg < headers arg + final_headers: Dict[str, str] = dict(DEFAULT_HEADERS) + if user_agent: + final_headers["User-Agent"] = user_agent + if headers: + final_headers.update(headers) + last_error: Optional[Exception] = None + for attempt in range(1, retries + 1): + logger.debug( + "http_get_attempt", + extra={ + "url": url, + "attempt": attempt, + "retries": retries, + "user_agent": final_headers.get("User-Agent"), + }, + ) + try: + with httpx.Client(timeout=timeout, headers=final_headers) as client: + response = client.get(url) + response.raise_for_status() + logger.debug( + "http_get_success", + extra={"url": url, "status_code": response.status_code, "content_length": len(response.content)}, + ) + return response + except Exception as exc: + last_error = exc + logger.warning( + "http_get_retry", + extra={"url": url, "attempt": attempt, "retries": retries}, + exc_info=True, + ) + logger.error("http_get_failed", extra={"url": url, "retries": retries, "error_type": type(last_error).__name__}) + raise RuntimeError( + f"Failed to fetch URL after {retries} attempts: {url!r}" + ) from last_error + + +def fetch_excel_first_sheet_as_records( + url: str, + retries: int = 3, + headers: Optional[Dict[str, str]] = None, + user_agent: Optional[str] = None, +) -> List[Dict[str, Any]]: + """Parse the first worksheet of an Excel file into record dictionaries. + + The first row is treated as the header row. Blank header cells are ignored, + and completely empty data rows are skipped. Values are normalised so that + missing cells become empty strings. + + Args: + url: Absolute URL of the Excel file to download. + retries: Number of fetch attempts before failing. + headers: Optional HTTP headers forwarded to the download request. + user_agent: Optional User-Agent string to use for the request when + ``headers`` does not already define one. + + Returns: + List[Dict[str, Any]]: One dictionary per non-empty row, limited to the + non-blank headers in the first row. 
+ """ + logger.debug("excel_fetch_start", extra={"url": url}) + response = get_with_retries(url, retries=retries, headers=headers, user_agent=user_agent) + workbook = load_workbook( + io.BytesIO(response.content), read_only=True, data_only=True + ) + first_sheet_name = workbook.sheetnames[0] + worksheet = workbook[first_sheet_name] + + row_iterator = worksheet.iter_rows(values_only=True) + + try: + header_row = next(row_iterator) + except StopIteration: + logger.info("excel_no_rows", extra={"url": url, "sheet": first_sheet_name}) + return [] + + headers: List[str] = [] + for header_cell in header_row: + # Normalise headers to non-empty strings + header_text = "" if header_cell is None else str(header_cell).strip() + headers.append(header_text) + + normalised_headers = [h for h in headers if h] + logger.debug( + "excel_headers_parsed", + extra={"url": url, "sheet": first_sheet_name, "header_count": len(normalised_headers)}, + ) + + records: List[Dict[str, Any]] = [] + for data_row in row_iterator: + # Build a dict limited to normalised headers + record: Dict[str, Any] = {} + for index, header in enumerate(headers): + if not header: + continue + value = data_row[index] if index < len(data_row) else None + record[header] = "" if value is None else value + # Skip fully empty rows + if any(value not in ("", None) for value in record.values()): + # Keep only normalised headers (drop any blanks) + records.append({key: record.get(key, "") for key in normalised_headers}) + + logger.info( + "excel_rows_parsed", + extra={"url": url, "sheet": first_sheet_name, "row_count": len(records)}, + ) + return records diff --git a/pages/dashboards/views/__init__.py b/pages/dashboards/views/__init__.py new file mode 100644 index 0000000..fa8f188 --- /dev/null +++ b/pages/dashboards/views/__init__.py @@ -0,0 +1,15 @@ +"""Public view exports for the dashboards app. + +Re-exports view classes to provide a stable import surface for URL configs +and other modules. 
+""" + +from .index import DashboardsIndex +from .lineage_competition import LineageCompetition +from .multidisease_serology import MultiDiseaseSerology + +__all__ = [ + "DashboardsIndex", + "LineageCompetition", + "MultiDiseaseSerology", +] diff --git a/pages/dashboards/views/index.py b/pages/dashboards/views/index.py new file mode 100644 index 0000000..fe949bb --- /dev/null +++ b/pages/dashboards/views/index.py @@ -0,0 +1,11 @@ +from utils.views import BaseTemplateView + + +class DashboardsIndex(BaseTemplateView): + """Index page for Dashboard + + WIP: currently a simple templateview but will be updated later + """ + + template_name = "dashboards/index.html" + title = "Data dashboards" diff --git a/pages/dashboards/views.py b/pages/dashboards/views/lineage_competition.py similarity index 64% rename from pages/dashboards/views.py rename to pages/dashboards/views/lineage_competition.py index a9bf94b..8a5b4c4 100644 --- a/pages/dashboards/views.py +++ b/pages/dashboards/views/lineage_competition.py @@ -1,16 +1,6 @@ from utils.views import BaseTemplateView -class DashboardsIndex(BaseTemplateView): - """Index page for Dashboard - - WIP: currently a simple templateview but will be updated later - """ - - template_name = "dashboards/index.html" - title = "Data dashboards" - - class LineageCompetition(BaseTemplateView): """SARS-CoV-2 Variant Competition dashboard page.""" diff --git a/pages/dashboards/views/multidisease_serology.py b/pages/dashboards/views/multidisease_serology.py new file mode 100644 index 0000000..e2e6038 --- /dev/null +++ b/pages/dashboards/views/multidisease_serology.py @@ -0,0 +1,120 @@ +"""Views for the multi-disease serology dashboard. + +This module defines the `MultiDiseaseSerology` view which renders a dashboard +by fetching two Excel files from blobserver, parsing them server-side, and +exposing normalised table data to the template. +""" + +from __future__ import annotations + +from typing import Any, Dict, List +import logging + +from utils.views import BaseTemplateView +from pages.dashboards.utils.blobserver import fetch_excel_first_sheet_as_records + + +KTH_XLSX_URL = "https://blobserver.dc.scilifelab.se/blob/KTH-produced-antigens.xlsx" +EXTERNAL_XLSX_URL = ( + "https://blobserver.dc.scilifelab.se/blob/External-PLP-proteinlist.xlsx" +) + +# Explicit column orders for deterministic display +KTH_HEADERS: List[str] = ["Virus type", "Variant", "Protein", "Details", "Host"] +EXTERNAL_HEADERS: List[str] = ["Pathogen", "Variant", "Protein", "Details", "Host"] + +logger = logging.getLogger(__name__) + + +class MultiDiseaseSerology(BaseTemplateView): + """ + Interim dashboard page: + - Fetch two Excel sheets from blobserver on each request. + - Parse into records server-side. + - Normalise into row lists aligned to headers to keep the template simple. + - No 'last updated' yet (will come from DB later). + """ + + template_name = "dashboards/multidisease_serology.html" + title = "Multi-disease serology" + + def _normalise_records_to_rows( + self, records: List[Dict[str, Any]], headers: List[str] + ) -> List[List[Any]]: + """Normalise dict records into deterministic row lists. + + Args: + records: List of dictionary records parsed from Excel. + headers: Column headers that define the order of values in each row. + + Returns: + List[List[Any]]: Rows where each row is ordered according to + ``headers``. Missing keys are represented as empty strings to keep + the rendering safe and consistent. 
+ """ + normalised_rows: List[List[Any]] = [] + for record in records: + row: List[Any] = [] + for header in headers: + row.append(record.get(header, "")) + normalised_rows.append(row) + return normalised_rows + + def get_context_data(self, **kwargs: Any) -> Dict[str, Any]: + """Build template context with normalised serology tables. + + Args: + **kwargs: Standard Django context keyword arguments. + + Returns: + Dict[str, Any]: Context including headers and row data for the + KTH-produced and externally produced antigens tables. + """ + context = super().get_context_data(**kwargs) + + # KTH-produced antigens (server-side parsed) + try: + logger.info("serology_fetch_start", extra={"source": "kth", "url": KTH_XLSX_URL}) + kth_records = fetch_excel_first_sheet_as_records( + KTH_XLSX_URL, + user_agent="pathogens-portal/multidisease-serology", + ) + logger.info("serology_fetch_success", extra={"source": "kth", "records": len(kth_records)}) + except Exception: + logger.exception("serology_fetch_failed", extra={"source": "kth", "url": KTH_XLSX_URL}) + kth_records = [] + kth_rows_aligned = self._normalise_records_to_rows(kth_records, KTH_HEADERS) + + # Externally produced antigens (server-side parsed) + try: + logger.info("serology_fetch_start", extra={"source": "external", "url": EXTERNAL_XLSX_URL}) + external_records = fetch_excel_first_sheet_as_records( + EXTERNAL_XLSX_URL, + user_agent="pathogens-portal/multidisease-serology", + ) + logger.info("serology_fetch_success", extra={"source": "external", "records": len(external_records)}) + except Exception: + logger.exception("serology_fetch_failed", extra={"source": "external", "url": EXTERNAL_XLSX_URL}) + external_records = [] + external_rows_aligned = self._normalise_records_to_rows( + external_records, EXTERNAL_HEADERS + ) + + logger.debug( + "serology_context_built", + extra={ + "kth_rows": len(kth_rows_aligned), + "external_rows": len(external_rows_aligned), + "kth_headers": len(KTH_HEADERS), + "external_headers": len(EXTERNAL_HEADERS), + }, + ) + context.update( + { + "kth_headers": KTH_HEADERS, + "kth_rows": kth_rows_aligned, + "external_headers": EXTERNAL_HEADERS, + "external_rows": external_rows_aligned, + } + ) + return context diff --git a/pyproject.toml b/pyproject.toml index 36cecf5..9162f05 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,7 +7,9 @@ requires-python = ">=3.13" dependencies = [ "django>=5.2.4", "django-environ>=0.12.0", + "httpx>=0.28.1", "markdown>=3.5.0", + "openpyxl>=3.1.5", "pillow>=10.0.0", ] diff --git a/uv.lock b/uv.lock index 7c22346..5fb87c6 100644 --- a/uv.lock +++ b/uv.lock @@ -1,7 +1,20 @@ version = 1 -revision = 3 +revision = 2 requires-python = ">=3.13" +[[package]] +name = "anyio" +version = "4.11.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "idna" }, + { name = "sniffio" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/c6/78/7d432127c41b50bccba979505f272c16cbcadcc33645d5fa3a738110ae75/anyio-4.11.0.tar.gz", hash = "sha256:82a8d0b81e318cc5ce71a5f1f8b5c4e63619620b63141ef8c995fa0db95a57c4", size = 219094, upload-time = "2025-09-23T09:19:12.58Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/15/b3/9b1a8074496371342ec1e796a96f99c82c945a339cd81a8e73de28b4cf9e/anyio-4.11.0-py3-none-any.whl", hash = "sha256:0287e96f4d26d4149305414d4e3bc32f0dcd0862365a4bddea19d7a1ec38c4fc", size = 109097, upload-time = "2025-09-23T09:19:10.601Z" }, +] + [[package]] name = "asgiref" version = "3.9.1" @@ -11,6 +24,15 @@ wheels = [ { url = 
"https://files.pythonhosted.org/packages/7c/3c/0464dcada90d5da0e71018c04a140ad6349558afb30b3051b4264cc5b965/asgiref-3.9.1-py3-none-any.whl", hash = "sha256:f3bba7092a48005b5f5bacd747d36ee4a5a61f4a269a6df590b43144355ebd2c", size = 23790, upload-time = "2025-07-08T09:07:41.548Z" }, ] +[[package]] +name = "certifi" +version = "2025.10.5" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/4c/5b/b6ce21586237c77ce67d01dc5507039d444b630dd76611bbca2d8e5dcd91/certifi-2025.10.5.tar.gz", hash = "sha256:47c09d31ccf2acf0be3f701ea53595ee7e0b8fa08801c6624be771df09ae7b43", size = 164519, upload-time = "2025-10-05T04:12:15.808Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e4/37/af0d2ef3967ac0d6113837b44a4f0bfe1328c2b9763bd5b1744520e5cfed/certifi-2025.10.5-py3-none-any.whl", hash = "sha256:0f212c2744a9bb6de0c56639a6f68afe01ecd92d91f14ae897c4fe7bbeeef0de", size = 163286, upload-time = "2025-10-05T04:12:14.03Z" }, +] + [[package]] name = "django" version = "5.2.6" @@ -59,6 +81,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/64/96/d967ca440d6a8e3861120f51985d8e5aec79b9a8bdda16041206adfe7adc/django_extensions-4.1-py3-none-any.whl", hash = "sha256:0699a7af28f2523bf8db309a80278519362cd4b6e1fd0a8cd4bf063e1e023336", size = 232980, upload-time = "2025-04-11T01:15:37.701Z" }, ] +[[package]] +name = "et-xmlfile" +version = "2.0.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/d3/38/af70d7ab1ae9d4da450eeec1fa3918940a5fafb9055e934af8d6eb0c2313/et_xmlfile-2.0.0.tar.gz", hash = "sha256:dab3f4764309081ce75662649be815c4c9081e88f0837825f90fd28317d4da54", size = 17234, upload-time = "2024-10-25T17:25:40.039Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c1/8b/5fe2cc11fee489817272089c4203e679c63b570a5aaeb18d852ae3cbba6a/et_xmlfile-2.0.0-py3-none-any.whl", hash = "sha256:7a91720bc756843502c3b7504c77b8fe44217c85c537d85037f0f536151b2caa", size = 18059, upload-time = "2024-10-25T17:25:39.051Z" }, +] + [[package]] name = "gunicorn" version = "23.0.0" @@ -71,6 +102,52 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/cb/7d/6dac2a6e1eba33ee43f318edbed4ff29151a49b5d37f080aad1e6469bca4/gunicorn-23.0.0-py3-none-any.whl", hash = "sha256:ec400d38950de4dfd418cff8328b2c8faed0edb0d517d3394e457c317908ca4d", size = 85029, upload-time = "2024-08-10T20:25:24.996Z" }, ] +[[package]] +name = "h11" +version = "0.16.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" }, +] + +[[package]] +name = "httpcore" +version = "1.0.9" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "h11" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, 
upload-time = "2025-04-24T22:06:22.219Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" }, +] + +[[package]] +name = "httpx" +version = "0.28.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "anyio" }, + { name = "certifi" }, + { name = "httpcore" }, + { name = "idna" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" }, +] + +[[package]] +name = "idna" +version = "3.10" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490, upload-time = "2024-09-15T18:07:39.745Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload-time = "2024-09-15T18:07:37.964Z" }, +] + [[package]] name = "markdown" version = "3.9" @@ -108,6 +185,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload-time = "2024-10-18T15:21:42.784Z" }, ] +[[package]] +name = "openpyxl" +version = "3.1.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "et-xmlfile" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/3d/f9/88d94a75de065ea32619465d2f77b29a0469500e99012523b91cc4141cd1/openpyxl-3.1.5.tar.gz", hash = "sha256:cf0e3cf56142039133628b5acffe8ef0c12bc902d2aadd3e0fe5878dc08d1050", size = 186464, upload-time = "2024-06-28T14:03:44.161Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c0/da/977ded879c29cbd04de313843e76868e6e13408a94ed6b987245dc7c8506/openpyxl-3.1.5-py2.py3-none-any.whl", hash = "sha256:5282c12b107bffeef825f4617dc029afaf41d0ea60823bbb665ef3079dc79de2", size = 250910, upload-time = "2024-06-28T14:03:41.161Z" }, +] + [[package]] name = "packaging" version = "25.0" @@ -255,6 +344,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/e1/a3/03216a6a86c706df54422612981fb0f9041dbb452c3401501d4a22b942c9/ruff-0.13.0-py3-none-win_arm64.whl", hash = "sha256:ab80525317b1e1d38614addec8ac954f1b3e662de9d59114ecbf771d00cf613e", size = 12312357, upload-time = "2025-09-10T16:25:35.595Z" }, ] +[[package]] +name = "sniffio" +version = "1.3.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = 
"https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372, upload-time = "2024-02-25T23:20:04.057Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload-time = "2024-02-25T23:20:01.196Z" }, +] + [[package]] name = "sqlparse" version = "0.5.3" @@ -271,7 +369,9 @@ source = { virtual = "." } dependencies = [ { name = "django" }, { name = "django-environ" }, + { name = "httpx" }, { name = "markdown" }, + { name = "openpyxl" }, { name = "pillow" }, ] @@ -302,7 +402,9 @@ prod = [ requires-dist = [ { name = "django", specifier = ">=5.2.4" }, { name = "django-environ", specifier = ">=0.12.0" }, + { name = "httpx", specifier = ">=0.28.1" }, { name = "markdown", specifier = ">=3.5.0" }, + { name = "openpyxl", specifier = ">=3.1.5" }, { name = "pillow", specifier = ">=10.0.0" }, ]