Bump the python group with 5 updates #21114


Open: wants to merge 1 commit into base: master

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Apr 1, 2025

Updates the requirements on tensorflow-cpu, tensorflow, torch, torch-xla, and tensorflow[and-cuda] to permit the latest versions.
Updates tensorflow-cpu to 2.18.1

Release notes

Sourced from tensorflow-cpu's releases.

TensorFlow 2.18.1

Release 2.18.1

Security

Bug Fixes and Other Changes

  • Loosen the ml_dtypes upper bound to < 1.0.0 to reduce conflicts when installed alongside other ML ecosystem components.

Breaking Changes

  • tf.lite
    • Interpreter:
      • tf.lite.Interpreter now warns of its upcoming removal and redirects to its new location, ai_edge_litert.interpreter. See the migration guide for details.
  • The tensorflow-tpu build for this patch is skipped due to SparseCore-related bugs; we suggest upgrading to 2.19.0 instead.
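Code that needs to keep working across the interpreter's move can select an import path at runtime. This is a minimal sketch, not TensorFlow code; `interpreter_import_path` is a hypothetical helper name, and the check only consults which packages are installed:

```python
import importlib.util

def interpreter_import_path():
    # Prefer the new LiteRT home when it is installed; otherwise fall back to
    # the deprecated tf.lite location, which emits a warning in 2.18+.
    if importlib.util.find_spec("ai_edge_litert") is not None:
        return "ai_edge_litert.interpreter"
    return "tensorflow.lite"

print(interpreter_import_path())
```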
Changelog

Sourced from tensorflow-cpu's changelog.

Release 2.18.1

(Identical to the release notes quoted above.)

Release 2.18.0

TensorFlow

Breaking Changes

  • tf.lite

    • Interpreter:
      • tf.lite.Interpreter now warns of its upcoming removal and redirects to its new location, ai_edge_litert.interpreter. See the migration guide for details.
    • C API:
      • An optional fourth parameter was added to TfLiteOperatorCreate as a step toward a cleaner API for TfLiteOperator. TfLiteOperatorCreate was added recently, in TensorFlow Lite 2.17.0 (released on 7/11/2024), so we do not expect much code to use it yet. Any breakage can be resolved by passing nullptr as the new fourth parameter.
  • TensorRT support is disabled in CUDA builds for code health improvement.

  • Hermetic CUDA support is added.

    Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

Known Caveats

Major Features and Improvements

  • TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.
    • Note that NumPy's type promotion rules have changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results.
    • TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community-standard deprecation timeline.
  • tf.lite:
    • The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
  • SignatureRunner is now supported for models with no signatures.
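The NEP 50 change above is easiest to see with a NumPy scalar: under the new rules, promotion follows dtypes rather than values, so a float64 scalar now upgrades a float32 array, where NumPy 1.x's value-based rules did not. Python scalars remain "weak" and keep the array dtype. A small illustration (plain NumPy, independent of TensorFlow):

```python
import numpy as np

x = np.ones(3, dtype=np.float32)

# Python float: "weak" under NEP 50, so the result stays float32 either way.
print((x + 1.0).dtype)

# NumPy float64 scalar: float64 under NEP 50; NumPy 1.x kept float32.
print((x + np.float64(1.0)).dtype)
```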

Bug Fixes and Other Changes

  • tf.data
    • Add an optional synchronous argument to map, specifying that the map should run synchronously rather than be parallelizable when options.experimental_optimization.map_parallelization=True. This saves memory compared to setting num_parallel_calls=1.
    • Add an optional use_unbounded_threadpool argument to map, specifying that the map should use an unbounded threadpool instead of the default pool sized by the number of cores on the machine. This can improve throughput for map functions that perform I/O or otherwise release the CPU.
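The trade-off behind use_unbounded_threadpool can be sketched with the standard library rather than tf.data: for work that mostly waits on I/O, a pool sized by core count leaves threads idle, while a larger pool overlaps more waits. This is an analogy under stated assumptions (io_bound is a stand-in for an I/O-releasing map function), not TensorFlow code:

```python
import concurrent.futures as cf
import os
import time

def io_bound(x):
    time.sleep(0.01)  # stand-in for I/O that releases the CPU
    return x * 2

items = list(range(32))

# Core-count-sized pool vs. a much larger pool: the larger pool finishes the
# same I/O-bound batch sooner because more sleeps overlap.
for workers in (os.cpu_count() or 4, 32):
    start = time.perf_counter()
    with cf.ThreadPoolExecutor(max_workers=workers) as ex:
        out = list(ex.map(io_bound, items))
    print(workers, round(time.perf_counter() - start, 3))
```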

... (truncated)

Commits
  • cb64295 Backport Windows builds into 2.18 for 2.18.1 (#88848)
  • 9e75d4b Merge pull request #88677 from tensorflow/chandrasekhard2-patch-1
  • bafd4e8 Update RELEASE.md
  • d980227 Update release notes for TensorFlow 2.18.1 (#87073)
  • 430ca7a Merge pull request #87087 from tensorflow-jenkins/version-numbers-2.18.1-5478
  • b37ff63 Update version numbers to 2.18.1
  • 500e03d Merge pull request #84966 from tensorflow/bump-mldtypes-2-18
  • 337b3f7 Bump ml-dtypes upper bound
  • 8241bac Merge pull request #80201 from tensorflow/r2.18-27ad610b6c7
  • 4399cf9 Add deletion warning to tf.lite.interpreter with a redirection notice to ai-e...
  • Additional commits viewable in compare view

Updates tensorflow to 2.18.1

Release notes, changelog, and commits

(Identical to the tensorflow-cpu section above; both packages are built from the same tensorflow/tensorflow repository.)

Updates torch from 2.5.1+cu121 to 2.6.0

Release notes

Sourced from torch's releases.

PyTorch 2.6.0 Release

  • Highlights
  • Tracked Regressions
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers

Highlights

We are excited to announce the release of PyTorch® 2.6 (release notes)! This release features multiple improvements for PT2: torch.compile can now be used with Python 3.13; a new performance-related knob, torch.compiler.set_stance; and several AOTInductor enhancements. Besides the PT2 improvements, another highlight is FP16 support on X86 CPUs.

NOTE: Starting with this release we are not going to publish on Conda, please see [Announcement] Deprecating PyTorch’s official Anaconda channel for the details.

For this release, the experimental Linux binaries shipped with CUDA 12.6.3 (as well as the Linux Aarch64, Linux ROCm 6.2.4, and Linux XPU binaries) are built with CXX11_ABI=1 and use the Manylinux 2.28 build platform. If you build PyTorch extensions with custom C++ or CUDA code, please update those builds to use CXX11_ABI=1 as well and report any issues you see. For the next PyTorch 2.7 release we plan to switch all Linux builds to Manylinux 2.28 and CXX11_ABI=1; please see [RFC] PyTorch next wheel build platform: manylinux-2.28 for the details and discussion.

Also in this release, as an important security improvement, we have changed the default value of the weights_only parameter of torch.load. This is a backward-compatibility-breaking change; please see this forum post for more details.
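The motivation for the weights_only default is that torch.load's full deserialization path is pickle-based, and a pickle payload can execute arbitrary code on load via __reduce__. A benign standard-library demonstration of the underlying risk (this is plain pickle, not torch code; Payload is a made-up class for illustration):

```python
import pickle

class Payload:
    # __reduce__ tells pickle to call print(...) during loading; a malicious
    # payload could return any callable, e.g. os.system.
    def __reduce__(self):
        return (print, ("side effect ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # executes print() as a side effect of deserialization
```

With weights_only=True, torch.load restricts deserialization to tensor data and a small allowlist of types, closing off this class of attack for untrusted checkpoints.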

This release is composed of 3892 commits from 520 contributors since PyTorch 2.5. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve PyTorch. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

... (truncated)

Commits

Updates torch-xla from 2.5.1 to 2.6.0

Release notes

Sourced from torch-xla's releases.

PyTorch/XLA 2.6 release

Highlights

Kernel improvements for vLLM: Multi-Queries Paged Attention Pallas Kernel

  • Added the multi-queries paged attention pallas kernel (#8328). Unlocks opportunities in vLLM such as prefix caching.
  • Perf improvement: only write to HBM at the last iteration (#8393)

Experimental scan operator (#7901)

Previously, when you looped over many nn.Modules of the same structure in PyTorch/XLA, the loop was unrolled during graph tracing, producing giant computation graphs. This unrolling leads to long compilation times, up to an hour for large language models with many decoder layers. In this release we offer an experimental API to reduce compilation times called "scan", which mirrors the jax.lax.scan transform in JAX. When you replace a Python for loop with scan, only the first iteration is compiled, and the compiled HLO is reused for all subsequent iterations. Building on torch_xla.experimental.scan, torch_xla.experimental.scan_layers offers a convenient interface for looping over a sequence of nn.Modules without unrolling.

Documentation: https://pytorch.org/xla/release/r2.6/features/scan.html
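The functional contract that scan mirrors (from jax.lax.scan) can be sketched in plain Python: a step function maps (carry, x) to (carry, y), and scan threads the carry while collecting the ys. This sketch only shows the semantics; torch_xla.experimental.scan additionally compiles the step once and reuses the compiled HLO for every iteration:

```python
def scan(fn, init, xs):
    # Thread a carry through xs, collecting one output per element.
    carry, ys = init, []
    for x in xs:
        carry, y = fn(carry, x)
        ys.append(y)
    return carry, ys

# Usage: a running sum that also records each partial sum.
final, partials = scan(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
print(final, partials)  # 10 [1, 3, 6, 10]
```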

C++11 ABI builds

Starting with PyTorch/XLA 2.6, we provide wheels and docker images built with two C++ ABI flavors: C++11 and pre-C++11. Pre-C++11 remains the default to align with PyTorch upstream, but C++11 ABI wheels and docker images have better lazy-tensor tracing performance.

To install C++11 ABI flavored 2.6 wheels (Python 3.10 example):

pip install torch==2.6.0+cpu.cxx11.abi \
  https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.6.0%2Bcxx11-cp310-cp310-manylinux_2_28_x86_64.whl \
  'torch_xla[tpu]' \
  -f https://storage.googleapis.com/libtpu-releases/index.html \
  -f https://storage.googleapis.com/libtpu-wheels/index.html \
  -f https://download.pytorch.org/whl/torch

The above command works for Python 3.10; we additionally publish Python 3.9 and 3.11 wheels.

To access C++11 ABI flavored docker image:

us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.6.0_3.10_tpuvm_cxx11

If your model is tracing bound (e.g. you see that the host CPU is busy tracing the model while TPUs are idle), switching to the C++11 ABI wheels/docker images can improve performance. Mixtral 8x7B benchmarking results on v5p-256, global batch size 1024:

  • Pre-C++11 ABI MFU: 33%
  • C++11 ABI MFU: 39%

... (truncated)

Commits

Updates tensorflow[and-cuda] to 2.18.1

Release notes, changelog, and commits

(Identical to the tensorflow-cpu section above; both packages are built from the same tensorflow/tensorflow repository.)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the specified ignore condition for that dependency

@dependabot dependabot[bot] added the dependencies and python labels on Apr 1, 2025
@gbaned gbaned requested a review from mattdangerw April 3, 2025 07:39
Updates the requirements on [tensorflow-cpu](https://github.com/tensorflow/tensorflow), [tensorflow](https://github.com/tensorflow/tensorflow), [torch](https://github.com/pytorch/pytorch), [torch-xla](https://github.com/pytorch/xla) and [tensorflow[and-cuda]](https://github.com/tensorflow/tensorflow) to permit the latest version.

Updates `tensorflow-cpu` to 2.18.1
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/v2.18.1/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.18.0...v2.18.1)

Updates `tensorflow` to 2.18.1
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/v2.18.1/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.18.0...v2.18.1)

Updates `torch` from 2.5.1+cu121 to 2.6.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.6.0)

Updates `torch-xla` from 2.5.1 to 2.6.0
- [Release notes](https://github.com/pytorch/xla/releases)
- [Commits](pytorch/xla@v2.5.1...v2.6.0)

Updates `tensorflow[and-cuda]` to 2.18.1
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/v2.18.1/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.18.0...v2.18.1)

---
updated-dependencies:
- dependency-name: tensorflow-cpu
  dependency-type: direct:production
  dependency-group: python
- dependency-name: tensorflow
  dependency-type: direct:production
  dependency-group: python
- dependency-name: torch
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: torch-xla
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: tensorflow[and-cuda]
  dependency-type: direct:production
  dependency-group: python
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot[bot] force-pushed the dependabot/pip/python-b884cca92d branch from fb0d156 to 86a9ff4 on April 4, 2025 at 20:31
Labels: awaiting review, dependencies, python, size:S

3 participants