
Releases: openvinotoolkit/openvino

2025.1.0

10 Apr 08:57
6fec065

Summary of major features and improvements  

  • More GenAI coverage and framework integrations to minimize code changes

    • New models supported: Phi-4 Mini, Jina CLIP v1, and BCE Embedding Base v1.
    • OpenVINO™ Model Server now supports VLM models, including Qwen2-VL, Phi-3.5-Vision, and InternVL2.
    • OpenVINO GenAI now includes image-to-image and inpainting features for transformer-based pipelines, such as Flux.1 and Stable Diffusion 3 models, enhancing their ability to generate more realistic content (see the sketch after this list).
    • Preview: AI Playground now utilizes the OpenVINO GenAI backend to enable highly optimized inferencing performance on AI PCs.
  • Broader LLM model support and more model compression techniques

    • Reduced binary size through optimization of the CPU plugin and removal of the GEMM kernel.
    • Optimization of new kernels for the GPU plugin significantly boosts the performance of Long Short-Term Memory (LSTM) models, used in many applications, including speech recognition, language modeling, and time series forecasting.
    • Preview: Token Eviction implemented in OpenVINO GenAI to reduce the memory consumption of the KV cache by eliminating unimportant tokens. The current implementation is beneficial for tasks that generate long sequences, such as chatbots and code generation.
    • NPU acceleration for text generation is now enabled in OpenVINO™ Runtime and OpenVINO™ Model Server to support the power-efficient deployment of VLMs on NPUs for AI PC use cases with low concurrency.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Support for the latest Intel® Core™ processors (Series 2, formerly codenamed Bartlett Lake), Intel® Core™ 3 Processor N-series and Intel® Processor N-series (formerly codenamed Twin Lake) on Windows.
    • Additional LLM performance optimizations on Intel® Core™ Ultra 200H series processors for improved 2nd token latency on Windows and Linux.
    • Enhanced performance and efficient resource utilization with the implementation of Paged Attention and Continuous Batching by default in the GPU plugin.
    • Preview: The new OpenVINO backend for ExecuTorch will enable accelerated inference and improved performance on Intel hardware, including CPUs, GPUs, and NPUs.
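
A minimal sketch of the new image-to-image flow in OpenVINO GenAI, assuming a Stable Diffusion 3 model already exported to OpenVINO format; the model directory, file names, and strength value below are illustrative, and the exact generate() signature may vary by GenAI version:

    import numpy as np
    from PIL import Image
    import openvino as ov
    import openvino_genai as ov_genai

    # Load an image-generation model exported to OpenVINO format
    # (illustrative local path).
    pipe = ov_genai.Image2ImagePipeline("./stable-diffusion-3-ov", "CPU")

    # GenAI image pipelines take the source image as an ov.Tensor
    # in [1, H, W, 3] uint8 layout.
    src = np.expand_dims(np.array(Image.open("input.png").convert("RGB")), 0)

    result = pipe.generate(
        "a watercolor painting of the same scene",
        ov.Tensor(src),
        strength=0.8,  # lower values preserve more of the source image
    )
    Image.fromarray(result.data[0]).save("output.png")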

Support Change and Deprecation Notices

  • Discontinued in 2025:
    • Runtime components:

      • The OpenVINO Affinity API property is no longer available. It has been replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • Tools:

      • The OpenVINO™ Development Tools package (pip install openvino-dev) is no longer available for OpenVINO releases in 2025.
      • Model Optimizer is no longer available. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
      • Intel® Streaming SIMD Extensions (Intel® SSE) are currently not enabled in the binary package by default. They are still supported in source code form.
      • The legacy prefixes l_, w_, and m_ have been removed from OpenVINO archive names.
    • OpenVINO GenAI:

      • The StreamerBase::put(int64_t token) method is deprecated; use StreamerBase::write() instead.
      • Returning a bool value from a callback streamer is no longer accepted. Callbacks must now return one of the three values of the StreamingStatus enum (see the sketch at the end of this section).
      • ChunkStreamerBase is deprecated. Use StreamerBase instead.
    • The NNCF create_compressed_model() method is now deprecated. The nncf.quantize() method is recommended for Quantization-Aware Training of PyTorch and TensorFlow models.
    • The OpenVINO Model Server (OVMS) benchmark client in C++, based on the TensorFlow Serving API, has been discontinued.
  • Deprecated and to be removed in the future:
    • openvino.Type.undefined is now deprecated and will be removed with version 2026.0. openvino.Type.dynamic should be used instead.
    • APT & YUM Repositories Restructure: Starting with release 2025.1, users can switch to the new repository structure for APT and YUM, which no longer uses year-based subdirectories (like “2025”). The old (legacy) structure will still be available until 2026, when the change will be finalized. Detailed instructions are available on the relevant documentation pages.
    • OpenCV binaries will be removed from Docker images in 2026.
    • Ubuntu 20.04 support will be deprecated in future OpenVINO releases due to the end of standard support.
    • “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the future. OpenVINO’s dynamic shape models are recommended instead.
    • macOS x86_64 is no longer recommended for use, as validation has been discontinued. Full support will be removed later in 2025.
    • The openvino namespace of the OpenVINO Python API has been redesigned, removing the nested openvino.runtime module. The old namespace is now considered deprecated and will be discontinued in 2026.0.
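
Since callback streamers must now return a StreamingStatus value rather than a bool, a migrated streamer might look like the following sketch; the model path is illustrative and the keyword arguments may differ slightly between GenAI versions:

    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline("./llama-3-8b-ov", "CPU")  # illustrative path

    def streamer(subword: str):
        print(subword, end="", flush=True)
        # A bool return value is no longer accepted; return one of the three
        # StreamingStatus values (RUNNING, STOP, CANCEL) instead.
        return ov_genai.StreamingStatus.RUNNING

    pipe.generate("What is OpenVINO?", max_new_tokens=100, streamer=streamer)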

You can find the OpenVINO™ toolkit 2025.1 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@11happy
@arkhamHack
@AsVoider
@chiruu12
@darshil929
@geeky33
@itsbharatj
@jpy794
@kuanxian1
@Mohamed-Ashraf273
@nikolasavic3
@oToToT
@SaifMohammed22
@srinjoydutta03

Release documentation is available here: https://docs.openvino.ai/2025
Release Notes are available here: https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html

2025.0.0

06 Feb 12:28
1f68be9

Summary of major features and improvements  

  • More GenAI coverage and framework integrations to minimize code changes

    • New models supported: Qwen 2.5, DeepSeek-R1-Distill-Llama-8B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-1.5B, FLUX.1 Schnell, and FLUX.1 Dev.
    • Whisper Model: Improved performance on CPUs, built-in GPUs, and discrete GPUs with GenAI API.
    • Preview: Introducing NPU support for torch.compile, giving developers the ability to use the OpenVINO backend to run the PyTorch API on NPUs. 300+ deep learning models are enabled from the TorchVision, Timm, and TorchBench repositories (see the sketch after this list).
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Preview: Addition of Prompt Lookup to GenAI API improves 2nd token latency for LLMs by effectively utilizing predefined prompts that match the intended use case.
    • Preview: The GenAI API now offers image-to-image inpainting functionality. This feature enables models to generate realistic content by inpainting specified modifications and seamlessly integrating them with the original image.
    • Asymmetric KV Cache compression is now enabled for INT8 on CPUs, resulting in lower memory consumption and improved 2nd token latency, especially when dealing with long prompts that require significant memory. The option should be explicitly specified by the user.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Support for the latest Intel® Core™ Ultra 200H series processors (formerly codenamed Arrow Lake-H)
    • Integration of the OpenVINO™ backend with the Triton Inference Server allows developers to utilize the Triton server for enhanced model serving performance when deploying on Intel CPUs.
    • Preview: A new OpenVINO™ backend integration allows developers to leverage OpenVINO performance optimizations directly within Keras 3 workflows for faster AI inference on CPUs, built-in GPUs, discrete GPUs, and NPUs. This feature is available with the latest Keras 3.8 release.
    • The OpenVINO Model Server now supports native Windows Server deployments, allowing developers to leverage better performance by eliminating container overhead and simplifying GPU deployment.
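
A sketch of the torch.compile preview mentioned above, routing a PyTorch model through the OpenVINO backend; the torchvision model and the "device" option value are illustrative:

    import torch
    import torchvision.models as models

    model = models.resnet50(weights="DEFAULT").eval()

    # Route PyTorch execution through the OpenVINO backend; the "device"
    # option selects the target device, here the NPU from the preview above.
    compiled = torch.compile(model, backend="openvino", options={"device": "NPU"})

    with torch.no_grad():
        output = compiled(torch.randn(1, 3, 224, 224))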

Support Change and Deprecation Notices

  • Now deprecated:
    • Legacy prefixes l_, w_, and m_ have been removed from OpenVINO archive names.
    • The runtime namespace for the Python API has been marked as deprecated and designated for removal in 2026.0. The new namespace structure has been delivered, and migration is possible immediately (see the sketch after this list). Details will be communicated through warnings and via documentation.
    • The NNCF create_compressed_model() method is deprecated. The nncf.quantize() method is now recommended for Quantization-Aware Training of PyTorch and TensorFlow models.
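
A sketch of the namespace migration mentioned above: code importing from the deprecated openvino.runtime module can switch to the top-level openvino namespace immediately:

    # Deprecated, to be removed in 2026.0:
    # from openvino.runtime import Core, Tensor

    # New top-level namespace, available immediately:
    from openvino import Core, Tensor

    core = Core()
    print(core.available_devices)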

You can find the OpenVINO™ toolkit 2025.0 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@0xfedcafe
@11happy
@cocoshe
@emir05051
@geeky33
@h6197627
@hub-bla
@Manideep-Kanna
@nashez
@shivam5522
@sumhaj
@vatsalashanubhag
@xyz-harshal

Release documentation is available here: https://docs.openvino.ai/2025
Release Notes are available here: https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html

2024.6.0

19 Dec 12:30
4c0f47d

Summary of major features and improvements  

  • The OpenVINO 2024.6 release includes updates for enhanced stability and improved LLM performance.

  • Introduced support for Intel® Arc™ B-Series Graphics (formerly known as Battlemage).

  • Implemented optimizations to improve the inference time and LLM performance on NPUs.

  • Improved LLM performance with GenAI API optimizations and bug fixes.

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The macOS x86_64 debug bins will no longer be provided with the OpenVINO toolkit, starting with OpenVINO 2024.5.
    • Python 3.8 is no longer supported, starting with OpenVINO 2024.5.
      • As MXNet does not support Python versions higher than 3.8 (according to the MXNet PyPI project), it is no longer supported by OpenVINO either.
    • Discrete Keem Bay devices are no longer supported, starting with OpenVINO 2024.5.
    • Support for discrete devices (formerly codenamed Raptor Lake) is no longer available for NPU.

You can find the OpenVINO™ toolkit 2024.6 release here.

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html

2024.5.0

20 Nov 13:12
db64e5c

Summary of major features and improvements  

  • More Gen AI coverage and framework integrations to minimize code changes

    • New models supported: Llama 3.2 (1B & 3B), Gemma 2 (2B & 9B), and YOLO11.
    • LLM support on NPU: Llama 3 8B, Llama 2 7B, Mistral-v0.2-7B, Qwen2-7B-Instruct and Phi-3
    • Noteworthy notebooks added: Sam2, Llama3.2, Llama3.2 - Vision, Wav2Lip, Whisper, and Llava.
    • Preview: support for Flax, a high-performance Python neural network library based on JAX. Its modular design allows for easy customization and accelerated inference on GPUs.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Optimizations for built-in GPUs on Intel® Core™ Ultra Processors (Series 1) and Intel® Arc™ Graphics include KV cache compression for memory reduction, improved usability, and model load time optimizations that improve first token latency for LLMs.
    • Dynamic quantization was enabled to improve first token latency for LLMs on built-in Intel® GPUs without impacting accuracy on Intel® Core™ Ultra Processors (Series 1). Second token latency will also improve for large batch inference.
    • A new method to generate synthetic text data is implemented in the Neural Network Compression Framework (NNCF), allowing LLMs to be compressed more accurately using data-aware methods without datasets. This feature will soon be accessible via Optimum Intel on Hugging Face.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Support for Intel® Xeon® 6 Processors with P-cores (formerly codenamed Granite Rapids) and Intel® Core™ Ultra 200S series processors (formerly codenamed Arrow Lake-S).
    • Preview: GenAI API enables multimodal AI deployment with support for multimodal pipelines for improved contextual awareness, transcription pipelines for easy audio-to-text conversions, and image generation pipelines for streamlined text-to-visual conversions.
    • Speculative decoding feature added to the GenAI API for improved performance and efficient text generation, using a small draft model that is periodically corrected by the full-size model (see the sketch after this list).
    • Preview: LoRA adapters are now supported in the GenAI API for developers to quickly and efficiently customize image and text generation models for specialized tasks.
    • The GenAI API now also supports LLMs on NPU, allowing developers to specify NPU as the target device, specifically for WhisperPipeline (for whisper-base, whisper-medium, and whisper-small) and LLMPipeline (for Llama 3 8B, Llama 2 7B, Mistral-v0.2-7B, Qwen2-7B-Instruct, and Phi-3 Mini-instruct). Use driver version 32.0.100.3104 or later for best performance.
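
A minimal sketch of the speculative decoding feature, assuming a main model and a small draft model already exported to OpenVINO format; both paths and the num_assistant_tokens value are illustrative:

    import openvino_genai as ov_genai

    # The small draft model proposes tokens; the full-size model verifies them.
    pipe = ov_genai.LLMPipeline(
        "./llama-3-8b-ov",  # illustrative main model path
        "CPU",
        draft_model=ov_genai.draft_model("./tiny-draft-ov", "CPU"),
    )

    config = ov_genai.GenerationConfig()
    config.max_new_tokens = 100
    config.num_assistant_tokens = 5  # tokens drafted per verification step

    print(pipe.generate("Explain speculative decoding in one paragraph.", config))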

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The macOS x86_64 debug bins will no longer be provided with the OpenVINO toolkit, starting with OpenVINO 2024.5.
    • Python 3.8 is no longer supported, starting with OpenVINO 2024.5.
      • As MXNet does not support Python versions higher than 3.8 (according to the MXNet PyPI project), it is no longer supported by OpenVINO either.
    • Discrete Keem Bay devices are no longer supported, starting with OpenVINO 2024.5.
    • Support for discrete devices (formerly codenamed Raptor Lake) is no longer available for NPU.

You can find the OpenVINO™ toolkit 2024.5 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@aku221b
@halm-zenger
@hibahassan1
@hub-bla
@jagadeeshmadinni
@nashez
@tianyiSKY1
@tiebreaker4869

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html

2024.4.0

19 Sep 12:25
c3152d3

Summary of major features and improvements  

  • More Gen AI coverage and framework integrations to minimize code changes

    • Support for GLM-4-9B Chat, MiniCPM-1B, Llama 3 and 3.1, Phi-3-Mini, Phi-3-Medium and YOLOX-s models.
    • Noteworthy notebooks added: Florence-2, NuExtract-tiny Structure Extraction, Flux.1 Image Generation, PixArt-α: Photorealistic Text-to-Image Synthesis, and Phi-3-Vision Visual Language Assistant.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • OpenVINO™ runtime optimized for Intel® Xe Matrix Extensions (Intel® XMX) systolic arrays on built-in GPUs for efficient matrix multiplication resulting in significant LLM performance boost with improved 1st and 2nd token latency, as well as a smaller memory footprint on Intel® Core™ Ultra Processors (Series 2).
    • Memory sharing enabled for NPUs on Intel® Core™ Ultra Processors (Series 2) for efficient pipeline integration without memory copy overhead.
    • Addition of the PagedAttention feature for discrete GPUs* enables a significant boost in throughput for parallel inferencing when serving LLMs on Intel® Arc™ Graphics or Intel® Data Center GPU Flex Series.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Support for Intel® Core™ Ultra Processors (Series 2), formerly codenamed Lunar Lake, on Windows.
    • OpenVINO™ Model Server now comes with production-quality support for the OpenAI-compatible API, which enables significantly higher throughput for parallel inferencing on Intel® Xeon® processors when serving LLMs to many concurrent users (see the sketch after this list).
    • Improved performance and memory consumption with prefix caching, KV cache compression, and other optimizations for serving LLMs using OpenVINO™ Model Server.
    • Support for Python 3.12.
    • Support for Red Hat Enterprise Linux (RHEL) version 9.
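
Because the API is OpenAI-compatible, any standard OpenAI client can talk to a running Model Server instance; a sketch using the openai Python package, in which the port, base path, and served model name are illustrative:

    from openai import OpenAI

    # Point the standard OpenAI client at a local OVMS instance.
    client = OpenAI(base_url="http://localhost:8000/v3", api_key="unused")

    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative served name
        messages=[{"role": "user", "content": "Hello from OVMS!"}],
        max_tokens=64,
    )
    print(response.choices[0].message.content)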

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The macOS x86_64 debug bins will no longer be provided with the OpenVINO toolkit, starting with OpenVINO 2024.5.
    • Python 3.8 is now considered deprecated, and it will not be available beyond the 2024.4 OpenVINO version.
    • dKMB support is now considered deprecated and will be fully removed with OpenVINO 2024.5.
    • Intel® Streaming SIMD Extensions (Intel® SSE) will be supported in source code form, but not enabled in the binary package by default, starting with OpenVINO 2025.0.
    • The openvino-nightly PyPI module will soon be discontinued. End-users should proceed with the Simple PyPI nightly repo instead. More information in Release Policy.
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
    • OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • OpenVINO Model Server components:
      • “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the future. OpenVINO’s dynamic shape models are recommended instead (see the sketch after this list).
    • A number of notebooks have been deprecated. For an up-to-date listing of available notebooks, refer to the OpenVINO™ Notebook index (openvinotoolkit.github.io).
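
Instead of relying on the deprecated “auto shape” behavior, a model can be reshaped to dynamic dimensions once, before compilation; a sketch in which the model path, input name, and dimension choices are illustrative:

    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # illustrative path

    # Declare batch and spatial dimensions dynamic (-1) up front instead of
    # reshaping the model at runtime per request.
    model.reshape({"input": ov.PartialShape([-1, 3, -1, -1])})

    compiled = core.compile_model(model, "CPU")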

You can find the OpenVINO™ toolkit 2024.4 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@hub-bla
@awayzjj
@jvr0123
@Pey-crypto
@nashez
@qxprakash

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html

2024.3.0

31 Jul 14:33
1e3b88e

Summary of major features and improvements  

  • More Gen AI coverage and framework integrations to minimize code changes

    • OpenVINO pre-optimized models are now available on Hugging Face, making it easier for developers to get started with these models (see the sketch after this list).
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Significant improvement in LLM performance on Intel discrete GPUs with the addition of Multi-Head Attention (MHA) and OneDNN enhancements.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Improved CPU performance when serving LLMs with the inclusion of vLLM and continuous batching in the OpenVINO Model Server (OVMS). vLLM is an easy-to-use open-source library that supports efficient LLM inferencing and model serving.
    • Preview support for Ubuntu 24.04 long-term support (LTS), 64-bit (kernel 6.8+).
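
A sketch of loading one of the pre-optimized models from Hugging Face via Optimum Intel; the model id below is illustrative, and the OpenVINO organization on Hugging Face lists the actual ids:

    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "OpenVINO/Phi-3-mini-4k-instruct-int4-ov"  # illustrative id
    model = OVModelForCausalLM.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))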

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
    • OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • OpenVINO Model Server components:
      • “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the future. OpenVINO’s dynamic shape models are recommended instead.
    • A number of notebooks have been deprecated. For an up-to-date listing of available notebooks, refer to the OpenVINO™ Notebook index (openvinotoolkit.github.io).

You can find the OpenVINO™ toolkit 2024.3 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@rghvsh
@PRATHAM-SPS
@duydl
@awayzjj
@jvr0123
@inbasperu
@DannyVlasenko
@amkarn258
@kcin96
@Vladislav-Denisov

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html

2024.2.0

17 Jun 17:21
5c0f38f

Summary of major features and improvements  

  • More Gen AI coverage and framework integrations to minimize code changes

    • Llama 3 optimizations for CPUs, built-in GPUs, and discrete GPUs for improved performance and efficient memory usage.
    • Support for Phi-3-mini, a family of AI models that leverages the power of small language models for faster, more accurate and cost-effective text processing.
    • Python Custom Operation is now enabled in OpenVINO, making it easier for Python developers to code their custom operations instead of using C++ custom operations (also supported). Python Custom Operation empowers users to implement their own specialized operations into any model.
    • Notebooks expansion to ensure better coverage for new models. Noteworthy notebooks added: DynamiCrafter, YOLOv10, Chatbot notebook with Phi-3, and QWEN2.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • GPTQ method for 4-bit weight compression added to NNCF for more efficient inference and improved performance of compressed LLMs.
    • Significant LLM performance improvements and reduced latency for both built-in GPUs and discrete GPUs.
    • Significant improvement in 2nd token latency and memory footprint of FP16 weight LLMs on AVX2 (13th Gen Intel® Core™ processors) and AVX512 (3rd Gen Intel® Xeon® Scalable Processors) based CPU platforms, particularly for small batch sizes.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Model Serving Enhancements:
      • Preview: OpenVINO Model Server (OVMS) now supports OpenAI-compatible API along with Continuous Batching and PagedAttention, enabling significantly higher throughput for parallel inferencing, especially on Intel® Xeon® processors, when serving LLMs to many concurrent users.
      • OpenVINO backend for Triton Server now supports built-in GPUs and discrete GPUs, in addition to dynamic shapes support.
      • Integration of TorchServe through torch.compile OpenVINO backend for easy model deployment, provisioning to multiple instances, model versioning, and maintenance.
    • Preview: addition of the Generate API, a simplified API for text generation using large language models with only a few lines of code (see the sketch after this list). The API is available through the newly launched OpenVINO GenAI package.
    • Support for Intel Atom® Processor X Series. For more details, see System Requirements.
    • Preview: Support for Intel® Xeon® 6 processor.
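
A minimal sketch of the new Generate API, assuming an LLM already exported to OpenVINO format at a local path (the path is illustrative):

    import openvino_genai as ov_genai

    # Text generation in a few lines through the new OpenVINO GenAI package.
    pipe = ov_genai.LLMPipeline("./TinyLlama-1.1B-ov", "CPU")  # illustrative
    print(pipe.generate("The meaning of life is", max_new_tokens=50))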

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
    • OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • OpenVINO Model Server components:
      • “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the future. OpenVINO’s dynamic shape models are recommended instead.
    • A number of notebooks have been deprecated. For an up-to-date listing of available notebooks, refer to the OpenVINO™ Notebook index (openvinotoolkit.github.io).

You can find the OpenVINO™ toolkit 2024.2 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@siddhant-0707
@adismort14
@LucaTamSapienza
@hongbo-wei
@awayzjj
@qxprakash
@keyonjie
@Huanli-Gong
@hegdeadithyak
@inbasperu
@Thodoris1999
@himanshugupta11002
@tranchung163
@SANJITH-KUMAR-20
@anzr299
@Vladislav-Denisov

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2024-2.html

2022.3.2

06 May 15:14
e2c7e4d

Major Features and Improvements Summary

This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year for bug fixes, and two years for security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for more details.

  • This 2022.3.2 LTS release provides functional and security bug fixes for the previous 2022.3.1 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit more efficiently.
  • Intel® Movidius™ VPU-based products are supported in this release.

You can find the OpenVINO™ toolkit 2022.3.2 release here.

Release documentation is available here: https://docs.openvino.ai/2022.3/

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-lts/2022-3.html

2024.1.0

25 Apr 14:45
f4afc98

Summary of major features and improvements  

  • More Generative AI coverage and framework integrations to minimize code changes.

    • Mixtral and URLNet models optimized for performance improvements on Intel® Xeon® processors.
    • Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models optimized for improved inference speed on Intel® Core™ Ultra processors with integrated GPU.
    • Support for Falcon-7B-Instruct, a GenAI Large Language Model (LLM) ready-to-use chat/instruct model with superior performance metrics.
    • New Jupyter Notebooks added: YOLO V9, YOLO V8 Oriented Bounding Boxes Detection (OBB), Stable Diffusion in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika, TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with OpenVINO and LangChain.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • LLM compilation time reduced through additional optimizations with compressed embedding. Improved 1st token performance of LLMs on 4th and 5th generations of Intel® Xeon® processors with Intel® Advanced Matrix Extensions (Intel® AMX).
    • Better LLM compression and improved performance with oneDNN, INT4, and INT8 support for Intel® Arc™ GPUs.
    • Significant memory reduction for select smaller GenAI models on Intel® Core™ Ultra processors with integrated GPU.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • The preview NPU plugin for Intel® Core™ Ultra processors is now available in the OpenVINO open-source GitHub repository, in addition to the main OpenVINO package on PyPI (see the sketch after this list).
    • The JavaScript API is now more easily accessible through the npm repository, enabling JavaScript developers’ seamless access to the OpenVINO API.
    • FP16 inference is now enabled by default for convolutional neural networks (CNNs) on ARM processors.
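
With the NPU plugin available in the main package, targeting the NPU is a one-line device choice; a sketch with an illustrative model path:

    import openvino as ov

    core = ov.Core()
    print(core.available_devices)  # includes "NPU" when plugin and driver are present

    compiled = core.compile_model("model.xml", "NPU")  # illustrative model path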

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using Discontinued features, you will have to revert to the last LTS OpenVINO version supporting them.
    For more details, refer to the OpenVINO Legacy Features and Components page.
  • Deprecated and to be removed in the future:
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
    • OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • OpenVINO Model Server components:
      • “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the future. OpenVINO’s dynamic shape models are recommended instead.

You can find the OpenVINO™ toolkit 2024.1 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@LucaTamSapienza
@AsakusaRinne
@awayzjj
@MonalSD
@siddhant-0707
@qxprakash
@FredBill1
@Pranshu-S
@vshampor
@PRATHAM-SPS
@inbasperu
@linzs148
@chux0519
@ccinv
@Vishwa44
@rghvsh
@Aryan8912
@BHbean
@Vladislav-Denisov
@MeeCreeps
@YaritaiKoto
@Godwin-T
@mory91
@Bepitic
@akiseakusa
@kuanxian1
@himanshugupta11002
@mengbingrock

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2024-1.html

2024.0.0

06 Mar 14:21
34caeef

Summary of major features and improvements  

  • More Generative AI coverage and framework integrations to minimize code changes.

    • Improved out-of-the-box experience for TensorFlow* sentence encoding models through the installation of OpenVINO™ toolkit Tokenizers.
    • OpenVINO™ toolkit now supports Mixture of Experts (MoE), an architecture that enables generative models to be processed more efficiently through the pipeline.
    • JavaScript developers now have seamless access to the OpenVINO API through a new binding that enables smooth integration with the JavaScript API.
    • New and noteworthy models validated: Mistral, StableLM-tuned-alpha-3b, and StableLM-Epoch-3B.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Improved quality on INT4 weight compression for LLMs by adding the popular technique, Activation-aware Weight Quantization, to the Neural Network Compression Framework (NNCF). This addition reduces memory requirements and helps speed up token generation.
    • Experience enhanced LLM performance on Intel® CPUs, with internal memory state enhancements and INT8 precision for the KV cache, tailored specifically for multi-query LLMs like ChatGLM.
    • Easier optimization and conversion of Hugging Face models – compress LLM models to INT8 and INT4 with the Hugging Face Optimum command line interface and export models to OpenVINO format. Note this is part of Optimum-Intel, which needs to be installed separately.
    • The OpenVINO™ 2024.0 release makes it easier for developers, by integrating more OpenVINO™ features with the Hugging Face* ecosystem. Store quantization configurations for popular models directly in Hugging Face to compress models into INT4 format while preserving accuracy and performance.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • A preview plugin architecture of the integrated Neural Processing Unit (NPU), part of Intel® Core™ Ultra processors, is now included in the main OpenVINO™ package on PyPI.
    • Improved performance on ARM* by enabling the ARM threading library. In addition, we now support multi-core ARM platforms and have enabled FP16 precision by default on macOS*.
    • Improved performance on ARM platforms using the throughput hint, which increases efficiency in the utilization of CPU cores and memory bandwidth (see the sketch after this list).
    • New and improved LLM serving samples from OpenVINO™ Model Server for multi-batch inputs and Retrieval Augmented Generation (RAG).
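
A sketch of the throughput hint mentioned above, using the current Python property API; the model path is illustrative:

    import openvino as ov
    import openvino.properties.hint as hints

    core = ov.Core()
    # The THROUGHPUT hint lets the runtime choose stream and core configuration
    # to maximize utilization of CPU cores and memory bandwidth.
    compiled = core.compile_model(
        "model.xml",  # illustrative model path
        "CPU",
        {hints.performance_mode: hints.PerformanceMode.THROUGHPUT},
    )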

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using Discontinued features, you will have to revert to the last LTS OpenVINO version supporting them.
    For more details, refer to the OpenVINO Legacy Features and Components page.
  • Discontinued in 2024.0:
    • Runtime components:
      • Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider using the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
      • OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API transition guide for reference).
      • All ONNX Frontend legacy API (known as ONNX_IMPORTER_API)
      • The 'PerformanceMode.UNDEFINED' property as part of the OpenVINO Python API
  • Deprecated and to be removed in the future:
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using OpenVINO Model Converter (API call: OVC) instead (see the sketch after this list). Follow the model conversion transition guide for more details.
    • OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
    • OpenVINO Model Server components:
      • Reshaping a model in runtime based on the incoming requests (auto shape and auto batch size) is deprecated and will be removed in the future. Using OpenVINO’s dynamic shape models is recommended instead.
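
A sketch of the replacement conversion flow: where a legacy Model Optimizer (mo) invocation was used, the OVC flow calls ov.convert_model and ov.save_model; the input file is illustrative:

    import openvino as ov

    # Replaces a legacy `mo --input_model model.onnx` invocation.
    ov_model = ov.convert_model("model.onnx")  # illustrative source model
    ov.save_model(ov_model, "model.xml")       # writes model.xml and model.bin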

You can find the OpenVINO™ toolkit 2024.0 release here.

Acknowledgements

Thanks for contributions from the OpenVINO developer community:
@rghvsh
@YaritaiKoto
@Abdulrahman-Adel
@jvr0123
@sami0i
@guy-tamir
@rupeshs
@karanjakhar
@abhinav231-valisetti
@rajatkrishna
@lukazlim
@siddhant-0707
@tiger100256-hu

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2024-0.html