diff --git a/pages/manage_and_operate/api/account/guide.en-asia.md b/pages/manage_and_operate/api/account/guide.en-asia.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-asia.md
+++ b/pages/manage_and_operate/api/account/guide.en-asia.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
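+
+The call itself can be scripted. Below is a minimal sketch using the `python-ovh` library, assuming the ConsumerKey created earlier; the route is illustrative, so substitute the creation route shown in the API console:
+
+```python
+# Minimal sketch with python-ovh (pip install ovh).
+# The route below is an assumption for illustration; use the actual
+# creation route listed in the API console.
+import ovh
+
+client = ovh.Client(
+    endpoint="ovh-eu",                  # or "ovh-ca", depending on your region
+    application_key="<application key>",
+    application_secret="<application secret>",
+    consumer_key="<consumer key>",      # the ConsumerKey created earlier
+)
+
+user = client.post(
+    "/me/identity/user",                # illustrative route
+    email="jane.doe@example.com",
+    login="jane-doe",
+    password="<a strong password that passes both checks above>",
+)
+print(user)
+```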
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-au.md b/pages/manage_and_operate/api/account/guide.en-au.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-au.md
+++ b/pages/manage_and_operate/api/account/guide.en-au.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-ca.md b/pages/manage_and_operate/api/account/guide.en-ca.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-ca.md
+++ b/pages/manage_and_operate/api/account/guide.en-ca.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-gb.md b/pages/manage_and_operate/api/account/guide.en-gb.md
index b90d1d78b6e..7a08d141f37 100644
--- a/pages/manage_and_operate/api/account/guide.en-gb.md
+++ b/pages/manage_and_operate/api/account/guide.en-gb.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
## Instructions
@@ -54,7 +54,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-ie.md b/pages/manage_and_operate/api/account/guide.en-ie.md
index b90d1d78b6e..7a08d141f37 100644
--- a/pages/manage_and_operate/api/account/guide.en-ie.md
+++ b/pages/manage_and_operate/api/account/guide.en-ie.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
## Instructions
@@ -54,7 +54,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-sg.md b/pages/manage_and_operate/api/account/guide.en-sg.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-sg.md
+++ b/pages/manage_and_operate/api/account/guide.en-sg.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-us.md b/pages/manage_and_operate/api/account/guide.en-us.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-us.md
+++ b/pages/manage_and_operate/api/account/guide.en-us.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
-* Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
+* Having a customer account with a Reseller Tag (contact your sales representative for eligibility, if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
* email : add an email address for this user
* login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the strength requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and must not appear in [Pwned Passwords](https://haveibeenpwned.com/Passwords), i.e. it has not been exposed in a known data breach.
## Go further
diff --git a/pages/manage_and_operate/api/account/guide.fr-ca.md b/pages/manage_and_operate/api/account/guide.fr-ca.md
index fb4d1c56e49..4acb4bbfacb 100644
--- a/pages/manage_and_operate/api/account/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/account/guide.fr-ca.md
@@ -13,8 +13,8 @@ Ce guide vous permettra aussi d'ajouter un ou des logins a ce sous-compte pour l
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
-* Avoir un compte client avec un tag Reseller (Contactez votre commercial pour connaitre votre éligibilité le cas échéant).
+* Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant).
## En pratique
@@ -55,7 +55,7 @@ Avec la ConsumerKey précédemment obtenue
* email : ajoutez une adresse mail pour cet utilisateur
* login : renseignez une chaîne de caractères
-* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external} .
+* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords).
## Aller plus loin
diff --git a/pages/manage_and_operate/api/account/guide.fr-fr.md b/pages/manage_and_operate/api/account/guide.fr-fr.md
index fb4d1c56e49..4acb4bbfacb 100644
--- a/pages/manage_and_operate/api/account/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/account/guide.fr-fr.md
@@ -13,8 +13,8 @@ Ce guide vous permettra aussi d'ajouter un ou des logins a ce sous-compte pour l
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
-* Avoir un compte client avec un tag Reseller (Contactez votre commercial pour connaitre votre éligibilité le cas échéant).
+* Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant).
## En pratique
@@ -55,7 +55,7 @@ Avec la ConsumerKey précédemment obtenue
* email : ajoutez une adresse mail pour cet utilisateur
* login : renseignez une chaîne de caractères
-* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external} .
+* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords).
## Aller plus loin
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
You will need to log in and set an application name and description.
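+
+Once registered, your application receives an application key and an application secret. As a minimal sketch with the `python-ovh` library, requesting a consumer key restricted to example access rules looks like this (the rules below are examples; grant only the paths your application needs):
+
+```python
+# Sketch of the credential-delegation flow with python-ovh
+# (pip install ovh). The access rules are examples only.
+import ovh
+
+client = ovh.Client(
+    endpoint="ovh-ca",
+    application_key="<application key>",
+    application_secret="<application secret>",
+)
+
+ck = client.new_consumer_key_request()
+ck.add_rules(ovh.API_READ_ONLY, "/me")                  # read-only on /me
+ck.add_recursive_rules(ovh.API_READ_WRITE, "/domain")   # example scope
+
+validation = ck.request()
+# Send the customer to validation["validationUrl"]; the key in
+# validation["consumerKey"] becomes usable once they have logged in
+# and approved the delegation.
+print(validation["validationUrl"])
+```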
@@ -111,5 +111,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
index 1872571a0f0..d30ebbc9433 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
@@ -112,5 +112,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
index 1872571a0f0..d30ebbc9433 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
@@ -112,5 +112,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md b/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
index 276ffb58c8d..4297aab4572 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
@@ -19,7 +19,7 @@ Par exemple, supposons que vous voulez créer un marché dans lequel vous, en ta
La première partie, en tant que développeur d'applications, consiste à enregistrer votre application sur OVHcloud.
-Pour ce faire, accédez à l'[API OVHcloud](https://ca.api.ovh.com/createApp/){.external}
+Pour ce faire, accédez à l'[API OVHcloud](https://ca.api.ovh.com/createApp/).
Vous devrez vous connecter et définir un nom et une description de l'application.
diff --git a/pages/manage_and_operate/api/apiv2/guide.de-de.md b/pages/manage_and_operate/api/apiv2/guide.de-de.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.de-de.md
+++ b/pages/manage_and_operate/api/apiv2/guide.de-de.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
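+
+As an illustration of this point, here is a minimal sketch of a signing helper shared by both branches, based on the documented APIv1 signature scheme and plain `requests`; the two routes at the end are examples, so substitute routes that exist for your services:
+
+```python
+# Sketch: one signing helper for both /v1 and /v2 calls, since the
+# branches accept the same credentials. Routes are examples.
+import hashlib
+import requests
+
+BASE = "https://eu.api.ovh.com"
+APP_KEY, APP_SECRET, CONSUMER_KEY = "<ak>", "<as>", "<ck>"
+
+def signed_get(path):
+    url = BASE + path
+    ts = requests.get(BASE + "/v1/auth/time").text.strip()
+    raw = "+".join([APP_SECRET, CONSUMER_KEY, "GET", url, "", ts])
+    sig = "$1$" + hashlib.sha1(raw.encode()).hexdigest()
+    return requests.get(url, headers={
+        "X-Ovh-Application": APP_KEY,
+        "X-Ovh-Consumer": CONSUMER_KEY,
+        "X-Ovh-Timestamp": ts,
+        "X-Ovh-Signature": sig,
+    })
+
+print(signed_get("/v1/me").status_code)          # historical branch
+print(signed_get("/v2/iam/policy").status_code)  # new branch (example route)
+```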
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
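+
+To make the contrast concrete, here is a purely hypothetical sketch; the routes and fields below are invented for illustration and do not exist in the real API:
+
+```python
+# Hypothetical illustration of the two styles; routes and fields are
+# invented and are not real OVHcloud paths.
+import ovh
+
+client = ovh.Client()  # credentials read from ovh.conf
+
+# Process-centred (v1 style): ask the API to perform an operation.
+client.post("/example/my-service/resizeDisk", size=200)
+
+# Resource-centred (v2 style): update the desired state and let the
+# API converge the resource toward it.
+spec = client.get("/example/my-service")
+spec["targetSpec"]["diskSize"] = 200
+client.put("/example/my-service", **spec)
+```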
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
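+
+For example, a first authenticated call with the Python library can be as small as the following, assuming your credentials are stored in an `ovh.conf` file or in environment variables, which the library supports:
+
+```python
+# Smallest possible check with python-ovh (pip install ovh):
+# endpoint and keys are read from ovh.conf or the environment.
+import ovh
+
+client = ovh.Client()
+print(client.get("/me"))  # your account details
+```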
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-asia.md b/pages/manage_and_operate/api/apiv2/guide.en-asia.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-asia.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-asia.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-au.md b/pages/manage_and_operate/api/apiv2/guide.en-au.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-au.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-au.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-ca.md b/pages/manage_and_operate/api/apiv2/guide.en-ca.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-ca.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-ca.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-gb.md b/pages/manage_and_operate/api/apiv2/guide.en-gb.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-gb.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-gb.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-ie.md b/pages/manage_and_operate/api/apiv2/guide.en-ie.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-ie.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-ie.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-sg.md b/pages/manage_and_operate/api/apiv2/guide.en-sg.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-sg.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-sg.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-us.md b/pages/manage_and_operate/api/apiv2/guide.en-us.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-us.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-us.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This model also abstracts away the complexity of moving a resource from one state to another, since that process is the responsibility of the API rather than the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.es-es.md b/pages/manage_and_operate/api/apiv2/guide.es-es.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.es-es.md
+++ b/pages/manage_and_operate/api/apiv2/guide.es-es.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This approach also abstracts away the complexity of transforming a resource from one state to another, since that process is the responsibility of the API rather than of the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
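+
+As a sketch of how the cursor pagination mentioned above can be consumed: the response header name comes from this guide, while the request header (`X-Pagination-Cursor`) and the `raw_call` usage are assumptions to verify against the APIv2 reference, and the route is a hypothetical placeholder.
+
+```python
+import ovh
+
+client = ovh.Client(endpoint='ovh-eu')  # credentials read from ovh.conf
+
+cursor = None
+while True:
+    headers = {'X-Pagination-Cursor': cursor} if cursor else None
+    # raw_call returns the underlying HTTP response, giving access to headers.
+    response = client.raw_call('GET', '/v2/example', None, True, headers=headers)  # hypothetical route
+    for item in response.json():
+        print(item)
+    cursor = response.headers.get('X-Pagination-Cursor-Next')
+    if not cursor:  # header absent: last page reached
+        break
+```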
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.es-us.md b/pages/manage_and_operate/api/apiv2/guide.es-us.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.es-us.md
+++ b/pages/manage_and_operate/api/apiv2/guide.es-us.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This approach also abstracts away the complexity of transforming a resource from one state to another, since that process is the responsibility of the API rather than of the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.fr-ca.md b/pages/manage_and_operate/api/apiv2/guide.fr-ca.md
index 08cef835cd2..f5f9c030195 100644
--- a/pages/manage_and_operate/api/apiv2/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/apiv2/guide.fr-ca.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objectif
-Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
Historiquement, les API d'OVHcloud sont disponibles sous la branche **/1.0** correspondant à la première version de l'API que nous avons publiée.
-Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
Cette nouvelle branche regroupera des nouvelles routes d'API, retravaillées sous un nouveau format, et deviendra la branche d'API principale pour les nouveaux développements de fonctionnalités de produits OVHcloud.
La branche **/1.0** continuera d'exister en parallèle de la branche **/v2** mais ne contiendra pas la même fonctionnalité. En tant que client, vous pourrez consommer des API de la branche **/1.0** et **/v2** simultanément dans vos programmes, tout en conservant la même authentification et les mêmes outils pour appeler l'API. Afin de standardiser le nommage de nos branches d'API, la branche **/1.0** est également disponible à travers l'alias **/v1**.
@@ -51,7 +51,7 @@ Lors de la sortie d'une nouvelle version majeure, nous ferons une évaluation de
#### Récupérer les versions disponibles via la console
-Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+Il est possible de voir la liste des versions disponibles sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
Les différentes versions sont affichées dans la section **SCHEMAS VERSION**. Vous pouvez ensuite sélectionner une version pour voir les schémas d'API associés.
@@ -64,9 +64,9 @@ Deux approches opposées existent pour voir l'état courant d'une ressource à t
- **Approche centrée sur le processus** : l'API expose l'état courant des ressources (par exemple une instance Public Cloud) et offre des opérations pour les modifier (par exemple, changer la taille d'un disque).
- **Approche centrée sur les ressources** : l'API expose à la fois l'état courant des ressources ainsi que l'état souhaité. Les modifications se font directement en mettant à jour l'état souhaité des ressources. Dans ce cas, l'API effectue elle-même les actions nécessaires pour atteindre l'état ciblé.
-La première approche est celle utilisée par l'API actuelle : [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+La première approche est celle utilisée par l'API actuelle : [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io){.external}. Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client.
+L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io). Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client.
### Gestion asynchrone et évènements
@@ -154,9 +154,9 @@ L'absence de l'en-tête `X-Pagination-Cursor-Next` dans une réponse d'API conte
Plusieurs bibliothèques sont disponibles pour utiliser les API OVHcloud :
-- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Aller plus loin
diff --git a/pages/manage_and_operate/api/apiv2/guide.fr-fr.md b/pages/manage_and_operate/api/apiv2/guide.fr-fr.md
index 9ed8db541c5..c1019992051 100644
--- a/pages/manage_and_operate/api/apiv2/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/apiv2/guide.fr-fr.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objectif
-Les API disponibles sur [https://eu.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://eu.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
Historiquement, les API d'OVHcloud sont disponibles sous la branche **/1.0** correspondant à la première version de l'API que nous avons publiée.
-Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
Cette nouvelle branche regroupera des nouvelles routes d'API, retravaillées sous un nouveau format, et deviendra la branche d'API principale pour les nouveaux développements de fonctionnalités de produits OVHcloud.
La branche **/1.0** continuera d'exister en parallèle de la branche **/v2** mais ne contiendra pas la même fonctionnalité. En tant que client, vous pourrez consommer des API de la branche **/1.0** et **/v2** simultanément dans vos programmes, tout en conservant la même authentification et les mêmes outils pour appeler l'API. Afin de standardiser le nommage de nos branches d'API, la branche **/1.0** est également disponible à travers l'alias **/v1**.
@@ -51,7 +51,7 @@ Lors de la sortie d'une nouvelle version majeure, nous ferons une évaluation de
#### Récupérer les versions disponibles via la console
-Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+Il est possible de voir la liste des versions disponibles sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
Les différentes versions sont affichées dans la section **SCHEMAS VERSION**. Vous pouvez ensuite sélectionner une version pour voir les schémas d'API associés.
@@ -64,9 +64,9 @@ Deux approches opposées existent pour voir l'état courant d'une ressource à t
- **Approche centrée sur le processus** : l'API expose l'état courant des ressources (par exemple une instance Public Cloud) et offre des opérations pour les modifier (par exemple, changer la taille d'un disque).
- **Approche centrée sur les ressources** : l'API expose à la fois l'état courant des ressources ainsi que l'état souhaité. Les modifications se font directement en mettant à jour l'état souhaité des ressources. Dans ce cas, l'API effectue elle-même les actions nécessaires pour atteindre l'état ciblé.
-La première approche est celle utilisée par l'API actuelle : [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+La première approche est celle utilisée par l'API actuelle : [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io){.external}. Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client.
+L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io). Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client.
### Gestion asynchrone et évènements
@@ -154,9 +154,9 @@ L'absence de l'en-tête `X-Pagination-Cursor-Next` dans une réponse d'API conte
Plusieurs bibliothèques sont disponibles pour utiliser les API OVHcloud :
-- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Aller plus loin
diff --git a/pages/manage_and_operate/api/apiv2/guide.it-it.md b/pages/manage_and_operate/api/apiv2/guide.it-it.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.it-it.md
+++ b/pages/manage_and_operate/api/apiv2/guide.it-it.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This approach also abstracts away the complexity of transforming a resource from one state to another, since that process is the responsibility of the API rather than of the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.pl-pl.md b/pages/manage_and_operate/api/apiv2/guide.pl-pl.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.pl-pl.md
+++ b/pages/manage_and_operate/api/apiv2/guide.pl-pl.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This approach also abstracts away the complexity of transforming a resource from one state to another, since that process is the responsibility of the API rather than of the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.pt-pt.md b/pages/manage_and_operate/api/apiv2/guide.pt-pt.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.pt-pt.md
+++ b/pages/manage_and_operate/api/apiv2/guide.pt-pt.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
#### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
- **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
- **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centred approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This approach also abstracts away the complexity of transforming a resource from one state to another, since that process is the responsibility of the API rather than of the customer.
### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
## Go further
diff --git a/pages/manage_and_operate/api/console-preview/guide.de-de.md b/pages/manage_and_operate/api/console-preview/guide.de-de.md
index f5efaf817e6..285f64db3bc 100644
--- a/pages/manage_and_operate/api/console-preview/guide.de-de.md
+++ b/pages/manage_and_operate/api/console-preview/guide.de-de.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-asia.md b/pages/manage_and_operate/api/console-preview/guide.en-asia.md
index c88d58b3f58..ce0f74fdab8 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-asia.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-asia.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-au.md b/pages/manage_and_operate/api/console-preview/guide.en-au.md
index c88d58b3f58..ce0f74fdab8 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-au.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-au.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-ca.md b/pages/manage_and_operate/api/console-preview/guide.en-ca.md
index c88d58b3f58..ce0f74fdab8 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-ca.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-ca.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-gb.md b/pages/manage_and_operate/api/console-preview/guide.en-gb.md
index 87c4a854c39..09c89332005 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-gb.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-gb.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-ie.md b/pages/manage_and_operate/api/console-preview/guide.en-ie.md
index 87c4a854c39..09c89332005 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-ie.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-ie.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-sg.md b/pages/manage_and_operate/api/console-preview/guide.en-sg.md
index c88d58b3f58..ce0f74fdab8 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-sg.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-sg.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.en-us.md b/pages/manage_and_operate/api/console-preview/guide.en-us.md
index c88d58b3f58..ce0f74fdab8 100644
--- a/pages/manage_and_operate/api/console-preview/guide.en-us.md
+++ b/pages/manage_and_operate/api/console-preview/guide.en-us.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.es-es.md b/pages/manage_and_operate/api/console-preview/guide.es-es.md
index f5efaf817e6..285f64db3bc 100644
--- a/pages/manage_and_operate/api/console-preview/guide.es-es.md
+++ b/pages/manage_and_operate/api/console-preview/guide.es-es.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.es-us.md b/pages/manage_and_operate/api/console-preview/guide.es-us.md
index 2347a5e1e50..e255f253982 100644
--- a/pages/manage_and_operate/api/console-preview/guide.es-us.md
+++ b/pages/manage_and_operate/api/console-preview/guide.es-us.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.fr-ca.md b/pages/manage_and_operate/api/console-preview/guide.fr-ca.md
index 07ebf8abbc8..95b05494a20 100644
--- a/pages/manage_and_operate/api/console-preview/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/console-preview/guide.fr-ca.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objectif
-Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
**Découvrez comment explorer les API OVHcloud à travers notre nouvelle console.**
## Prérequis
- Disposer d'un compte OVHcloud actif et connaître ses identifiants.
-- Être sur la page web des [API OVHcloud](/links/api){.external}.
+- Être sur la page web des [API OVHcloud](/links/api).
## En pratique
diff --git a/pages/manage_and_operate/api/console-preview/guide.fr-fr.md b/pages/manage_and_operate/api/console-preview/guide.fr-fr.md
index f18afa27916..826222d6fb6 100644
--- a/pages/manage_and_operate/api/console-preview/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/console-preview/guide.fr-fr.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objectif
-Les API disponibles sur [https://eu.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://eu.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
**Découvrez comment explorer les API OVHcloud à travers notre nouvelle console.**
## Prérequis
- Disposer d'un compte OVHcloud actif et connaître ses identifiants.
-- Être sur la page web des [API OVHcloud](/links/api){.external}.
+- Être sur la page web des [API OVHcloud](/links/api).
## En pratique
diff --git a/pages/manage_and_operate/api/console-preview/guide.it-it.md b/pages/manage_and_operate/api/console-preview/guide.it-it.md
index f5efaf817e6..285f64db3bc 100644
--- a/pages/manage_and_operate/api/console-preview/guide.it-it.md
+++ b/pages/manage_and_operate/api/console-preview/guide.it-it.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.pl-pl.md b/pages/manage_and_operate/api/console-preview/guide.pl-pl.md
index f5efaf817e6..285f64db3bc 100644
--- a/pages/manage_and_operate/api/console-preview/guide.pl-pl.md
+++ b/pages/manage_and_operate/api/console-preview/guide.pl-pl.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/console-preview/guide.pt-pt.md b/pages/manage_and_operate/api/console-preview/guide.pt-pt.md
index f5efaf817e6..285f64db3bc 100644
--- a/pages/manage_and_operate/api/console-preview/guide.pt-pt.md
+++ b/pages/manage_and_operate/api/console-preview/guide.pt-pt.md
@@ -6,14 +6,14 @@ updated: 2023-03-27
## Objective
-The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
+The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel.
**Discover how to explore the OVHcloud APIs on our brand new console**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md
index 2b7b377ca9a..e6545b4d4f1 100644
--- a/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md
+++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md
@@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud.
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with Reseller Tag (contact your sales representative for eligibility if applicable).
-* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}.
+* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account).
* Having at least the Business or Enterprise Support level.
## Instructions
diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md
index 2b7b377ca9a..e6545b4d4f1 100644
--- a/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md
+++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md
@@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud.
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with Reseller Tag (contact your sales representative for eligibility if applicable).
-* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}.
+* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account).
* Having at least the Business or Enterprise Support level.
## Instructions
diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md
index 2b7b377ca9a..e6545b4d4f1 100644
--- a/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md
+++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md
@@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud.
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected to the [OVHcloud API](/links/api).
+* Having [created your credentials for the OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with Reseller Tag (contact your sales representative for eligibility if applicable).
-* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}.
+* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account).
* Having at least the Business or Enterprise Support level.
## Instructions
diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md b/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md
index 9fefedc124f..10fb32a671f 100644
--- a/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md
@@ -10,9 +10,9 @@ Nous allons décrire une partie du cycle de gestion de votre paiement et de votr
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
-* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
+* Avoir [créé des sous-comptes pour l'API OVHcloud si nécessaire](/pages/manage_and_operate/api/account).
* Avoir a minima le niveau de support de type Business ou Enterprise.
## En pratique
diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md b/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md
index 9fefedc124f..10fb32a671f 100644
--- a/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md
@@ -10,9 +10,9 @@ Nous allons décrire une partie du cycle de gestion de votre paiement et de votr
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
-* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
+* Avoir [créé des sous-comptes pour l'API OVHcloud si nécessaire](/pages/manage_and_operate/api/account).
* Avoir a minima le niveau de support de type Business ou Enterprise.
## En pratique
diff --git a/pages/manage_and_operate/api/first-steps/guide.de-de.md b/pages/manage_and_operate/api/first-steps/guide.de-de.md
index ed5a4dc4db5..c434155e59c 100644
--- a/pages/manage_and_operate/api/first-steps/guide.de-de.md
+++ b/pages/manage_and_operate/api/first-steps/guide.de-de.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Ziel
-Die unter [https://api.ovh.com/](/links/api){.external} verfügbare API erlaubt es Ihnen, OVHcloud Produkte zu bestellen, zu verwalten, zu aktualisieren und zu konfigurieren, ohne ein grafisches Interface wie das Kundencenter zu verwenden.
+Die unter [https://api.ovh.com/](/links/api) verfügbare API erlaubt es Ihnen, OVHcloud Produkte zu bestellen, zu verwalten, zu aktualisieren und zu konfigurieren, ohne ein grafisches Interface wie das Kundencenter zu verwenden.
**Hier erfahren Sie, wie Sie die OVHcloud API verwenden und mit Ihren Anwendungen verbinden.**
## Voraussetzungen
- Sie verfügen über einen aktiven OVHcloud Kunden-Account und dessen Zugangsdaten.
-- Sie sind auf der Webseite der [OVHcloud API](/links/api){.external}.
+- Sie sind auf der Webseite der [OVHcloud API](/links/api).
## In der praktischen Anwendung
@@ -131,7 +131,7 @@ Die Tabs `PHP` und `Python` enthalten die Elemente, die entsprechend der Anwendu
Jede Anwendung, die mit der OVHcloud API kommunizieren möchte, muss zuerst freigegeben werden.
-Klicken Sie hierzu auf folgenden Link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+Klicken Sie hierzu auf folgenden Link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Geben Sie Ihre Kundenkennung, Ihr Passwort und den Namen Ihrer Anwendung ein. Der Name kann nützlich sein, um anderen Personen Zugriff zu gewähren.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-asia.md b/pages/manage_and_operate/api/first-steps/guide.en-asia.md
index e72f3a24aa3..e27c18eb042 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-asia.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-asia.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
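+
+Alternatively, the consumer key can be requested programmatically. A minimal sketch with the python-ovh library (the validation URL must still be opened once in a browser):
+
+```python
+import ovh
+
+client = ovh.Client(
+    endpoint='ovh-ca',
+    application_key='<application key>',
+    application_secret='<application secret>',
+)
+
+ck_request = client.new_consumer_key_request()
+ck_request.add_recursive_rules(ovh.API_READ_ONLY, '/')  # read-only on the whole API
+
+validation = ck_request.request()
+print('Validate at:', validation['validationUrl'])
+print('Consumer key:', validation['consumerKey'])
+```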
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-au.md b/pages/manage_and_operate/api/first-steps/guide.en-au.md
index e72f3a24aa3..e27c18eb042 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-au.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-au.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-ca.md b/pages/manage_and_operate/api/first-steps/guide.en-ca.md
index e72f3a24aa3..e27c18eb042 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-ca.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-ca.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-gb.md b/pages/manage_and_operate/api/first-steps/guide.en-gb.md
index a587b2ada91..2626fc923fc 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-gb.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-gb.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-ie.md b/pages/manage_and_operate/api/first-steps/guide.en-ie.md
index a587b2ada91..2626fc923fc 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-ie.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-ie.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-sg.md b/pages/manage_and_operate/api/first-steps/guide.en-sg.md
index e72f3a24aa3..e27c18eb042 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-sg.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-sg.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-us.md b/pages/manage_and_operate/api/first-steps/guide.en-us.md
index e72f3a24aa3..e27c18eb042 100644
--- a/pages/manage_and_operate/api/first-steps/guide.en-us.md
+++ b/pages/manage_and_operate/api/first-steps/guide.en-us.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objective
-The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel.
**Learn how to use OVHcloud APIs and how to pair them with your applications.**
## Requirements
- You have an active OVHcloud account and know its credentials.
-- You are on the [OVHcloud API](/links/api){.external} web page.
+- You are on the [OVHcloud API](/links/api) web page.
## Instructions
@@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco
Any application that wants to communicate with the OVHcloud API must be declared in advance.
-To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it.
diff --git a/pages/manage_and_operate/api/first-steps/guide.es-es.md b/pages/manage_and_operate/api/first-steps/guide.es-es.md
index 531803ea1fb..a7d4ecf8375 100644
--- a/pages/manage_and_operate/api/first-steps/guide.es-es.md
+++ b/pages/manage_and_operate/api/first-steps/guide.es-es.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Objetivo
-Las API disponibles en [https://api.ovh.com/](/links/api){.external} le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente.
+Las API disponibles en [https://api.ovh.com/](/links/api) le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente.
**Cómo utilizar las API de OVHcloud y cómo asociarlas a sus aplicaciones**
## Requisitos
- Disponer de una cuenta de OVHcloud activa y conocer sus claves de acceso.
-- Estar en la página web de las [API de OVHcloud](/links/api){.external}.
+- Estar en la página web de las [API de OVHcloud](/links/api).
## Procedimiento
@@ -132,7 +132,7 @@ Las pestañas `PHP` y `Python` contienen los elementos que se añadirán al scri
Todas las aplicaciones que quieran comunicarse con la API de OVHcloud deben notificarse con antelación.
-Para ello, haga clic en el siguiente enlace: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+Para ello, haga clic en el siguiente enlace: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Introduzca su identificador de cliente, su contraseña y el nombre de su aplicación. El nombre será útil más adelante si desea permitir que otras personas lo usen.
diff --git a/pages/manage_and_operate/api/first-steps/guide.es-us.md b/pages/manage_and_operate/api/first-steps/guide.es-us.md
index 9f40d0f9ee8..9b51a0a9999 100644
--- a/pages/manage_and_operate/api/first-steps/guide.es-us.md
+++ b/pages/manage_and_operate/api/first-steps/guide.es-us.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Objetivo
-Las API disponibles en [https://ca.api.ovh.com/](/links/api){.external} le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente.
+Las API disponibles en [https://ca.api.ovh.com/](/links/api) le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente.
**Cómo utilizar las API de OVHcloud y cómo asociarlas a sus aplicaciones**
## Requisitos
- Disponer de una cuenta de OVHcloud activa y conocer sus claves de acceso.
-- Estar en la página web de las [API de OVHcloud](/links/api){.external}.
+- Estar en la página web de las [API de OVHcloud](/links/api).
## Procedimiento
@@ -132,7 +132,7 @@ Las pestañas `PHP` y `Python` contienen los elementos que se añadirán al scri
Todas las aplicaciones que quieran comunicarse con la API de OVHcloud deben notificarse con antelación.
-Para ello, haga clic en el siguiente enlace: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+Para ello, haga clic en el siguiente enlace: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Introduzca su identificador de cliente, su contraseña y el nombre de su aplicación. El nombre será útil más adelante si desea permitir que otras personas lo usen.
diff --git a/pages/manage_and_operate/api/first-steps/guide.fr-ca.md b/pages/manage_and_operate/api/first-steps/guide.fr-ca.md
index 78b3d7a8a96..c2ad96c1e3d 100644
--- a/pages/manage_and_operate/api/first-steps/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/first-steps/guide.fr-ca.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objectif
-Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
**Découvrez comment utiliser les API OVHcloud mais aussi comment les coupler avec vos applications**
## Prérequis
- Disposer d'un compte OVHcloud actif et connaître ses identifiants.
-- Être sur la page web des [API OVHcloud](/links/api){.external}.
+- Être sur la page web des [API OVHcloud](/links/api).
## En pratique
@@ -128,7 +128,7 @@ Les onglets `PHP` et `Python` contiennent les éléments à ajouter dans votre s
Toute application souhaitant communiquer avec l'API OVHcloud doit être déclarée à l'avance.
-Pour ce faire, cliquez sur le lien suivant : [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}.
+Pour ce faire, cliquez sur le lien suivant : [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/).
Renseignez votre identifiant client, votre mot de passe et le nom de votre application. Le nom sera utile plus tard si vous voulez autoriser d'autres personnes à l'utiliser.
diff --git a/pages/manage_and_operate/api/first-steps/guide.fr-fr.md b/pages/manage_and_operate/api/first-steps/guide.fr-fr.md
index d999b3105b2..0d33034f907 100644
--- a/pages/manage_and_operate/api/first-steps/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/first-steps/guide.fr-fr.md
@@ -6,14 +6,14 @@ updated: 2025-05-13
## Objectif
-Les API disponibles sur [https://api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
+Les API disponibles sur [https://api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client.
**Découvrez comment utiliser les API OVHcloud mais aussi comment les coupler avec vos applications**
## Prérequis
- Disposer d'un compte OVHcloud actif et connaître ses identifiants.
-- Être sur la page web des [API OVHcloud](/links/api){.external}.
+- Être sur la page web des [API OVHcloud](/links/api).
## En pratique
@@ -128,7 +128,7 @@ Les onglets `PHP` et `Python` contiennent les éléments à ajouter dans votre s
Toute application souhaitant communiquer avec l'API OVHcloud doit être déclarée à l'avance.
-Pour ce faire, cliquez sur le lien suivant : [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+Pour ce faire, cliquez sur le lien suivant : [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Renseignez votre identifiant client, votre mot de passe et le nom de votre application. Le nom sera utile plus tard si vous voulez autoriser d'autres personnes à l'utiliser.
diff --git a/pages/manage_and_operate/api/first-steps/guide.it-it.md b/pages/manage_and_operate/api/first-steps/guide.it-it.md
index a0fe1365197..24a2412944e 100644
--- a/pages/manage_and_operate/api/first-steps/guide.it-it.md
+++ b/pages/manage_and_operate/api/first-steps/guide.it-it.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Obiettivo
-Le API disponibili su [https://api.ovh.com/](/links/api){.external} ti permettono di acquistare, gestire, aggiornare e configurare prodotti OVHcloud senza utilizzare un'interfaccia grafica come lo Spazio Cliente.
+Le API disponibili su [https://api.ovh.com/](/links/api) ti permettono di acquistare, gestire, aggiornare e configurare prodotti OVHcloud senza utilizzare un'interfaccia grafica come lo Spazio Cliente.
**Scopri come utilizzare le API OVHcloud e come associarle alle tue applicazioni**
## Prerequisiti
- Disporre di un account OVHcloud attivo e conoscere le proprie credenziali
-- Essere sulla pagina Web delle [API OVHcloud](/links/api){.external}.
+- Essere sulla pagina Web delle [API OVHcloud](/links/api).
## Procedura
@@ -132,7 +132,7 @@ Le schede `PHP` e `Python` contengono gli elementi da aggiungere al tuo script i
Qualsiasi applicazione che desideri comunicare con l'API OVHcloud deve essere dichiarata in anticipo.
-Clicca su questo link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+Clicca su questo link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Inserisci il tuo identificativo cliente, la password e il nome della tua applicazione. Il nome sarà utile più tardi se vuoi autorizzare altre persone a usarlo.
diff --git a/pages/manage_and_operate/api/first-steps/guide.pl-pl.md b/pages/manage_and_operate/api/first-steps/guide.pl-pl.md
index 32edc512298..a594673af4f 100644
--- a/pages/manage_and_operate/api/first-steps/guide.pl-pl.md
+++ b/pages/manage_and_operate/api/first-steps/guide.pl-pl.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Wprowadzenie
-API dostępne na stronie [https://api.ovh.com/](/links/api){.external} pozwalają na zakup, zarządzanie i konfigurowanie produktów OVHcloud bez konieczności korzystania z interfejsu graficznego, takiego jak Panel klienta.
+API dostępne na stronie [https://api.ovh.com/](/links/api) pozwalają na zakup, zarządzanie i konfigurowanie produktów OVHcloud bez konieczności korzystania z interfejsu graficznego, takiego jak Panel klienta.
**Dowiedz się, jak korzystać z API OVHcloud oraz jak je łączyć z Twoimi aplikacjami**
## Wymagania początkowe
- Posiadanie aktywnego konta OVHcloud i znajomość jego identyfikatorów
-- Bycie na stronie WWW [API OVHcloud](/links/api){.external}.
+- Bycie na stronie WWW [API OVHcloud](/links/api).
## W praktyce
@@ -132,7 +132,7 @@ Zakładki `PHP` i `Python` zawierają elementy, które należy dodać do skryptu
Każda aplikacja, która chce komunikować się z API OVHcloud, musi zostać zgłoszona z wyprzedzeniem.
-W tym celu kliknij link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+W tym celu kliknij link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Wpisz identyfikator klienta, hasło i nazwę aplikacji. Nazwa będzie pomocna później, jeśli chcesz zezwolić innym na jej używanie.
diff --git a/pages/manage_and_operate/api/first-steps/guide.pt-pt.md b/pages/manage_and_operate/api/first-steps/guide.pt-pt.md
index 91bfed784e6..16102d438a6 100644
--- a/pages/manage_and_operate/api/first-steps/guide.pt-pt.md
+++ b/pages/manage_and_operate/api/first-steps/guide.pt-pt.md
@@ -10,14 +10,14 @@ updated: 2025-05-13
## Objetivo
-As API disponíveis em [https://api.ovh.com/](/links/api){.external} permitem-lhe adquirir, gerir, atualizar e configurar produtos OVHcloud sem utilizar uma interface gráfica como a Área de Cliente.
+As API disponíveis em [https://api.ovh.com/](/links/api) permitem-lhe adquirir, gerir, atualizar e configurar produtos OVHcloud sem utilizar uma interface gráfica como a Área de Cliente.
**Saiba como utilizar as API da OVHcloud e como associá-las às suas aplicações**
## Requisitos
- Ter uma conta OVHcloud ativa e conhecer os seus identificadores.
-- Estar na página web das [API OVHcloud](/links/api){.external}.
+- Estar na página web das [API OVHcloud](/links/api).
## Instruções
@@ -132,7 +132,7 @@ Os separadores `PHP` e `Python` contêm os elementos que devem ser adicionados n
Qualquer aplicação que pretenda comunicar com a API da OVHcloud deve ser declarada previamente.
-Para isso, clique na seguinte ligação: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}.
+Para isso, clique na seguinte ligação: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/).
Indique o seu ID de cliente, a sua palavra-passe e o nome da sua aplicação. O nome será útil mais tarde se quiser autorizar outras pessoas a utilizá-lo.
diff --git a/pages/manage_and_operate/api/services/guide.en-asia.md b/pages/manage_and_operate/api/services/guide.en-asia.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-asia.md
+++ b/pages/manage_and_operate/api/services/guide.en-asia.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-au.md b/pages/manage_and_operate/api/services/guide.en-au.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-au.md
+++ b/pages/manage_and_operate/api/services/guide.en-au.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-ca.md b/pages/manage_and_operate/api/services/guide.en-ca.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-ca.md
+++ b/pages/manage_and_operate/api/services/guide.en-ca.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-gb.md b/pages/manage_and_operate/api/services/guide.en-gb.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-gb.md
+++ b/pages/manage_and_operate/api/services/guide.en-gb.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-ie.md b/pages/manage_and_operate/api/services/guide.en-ie.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-ie.md
+++ b/pages/manage_and_operate/api/services/guide.en-ie.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-sg.md b/pages/manage_and_operate/api/services/guide.en-sg.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-sg.md
+++ b/pages/manage_and_operate/api/services/guide.en-sg.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.en-us.md b/pages/manage_and_operate/api/services/guide.en-us.md
index 0049144448f..3ad72ba0918 100644
--- a/pages/manage_and_operate/api/services/guide.en-us.md
+++ b/pages/manage_and_operate/api/services/guide.en-us.md
@@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a
## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
* Having a customer account with a Reseller Tag (contact your sales representative for eligibility if applicable).
## Instructions
diff --git a/pages/manage_and_operate/api/services/guide.fr-ca.md b/pages/manage_and_operate/api/services/guide.fr-ca.md
index 9f7fae9a4bc..dfdbab3741e 100644
--- a/pages/manage_and_operate/api/services/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/services/guide.fr-ca.md
@@ -18,8 +18,8 @@ La route d'API **/service** regroupe les actions communes à tous types de servi
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
* Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant).
## En pratique
diff --git a/pages/manage_and_operate/api/services/guide.fr-fr.md b/pages/manage_and_operate/api/services/guide.fr-fr.md
index 9f7fae9a4bc..dfdbab3741e 100644
--- a/pages/manage_and_operate/api/services/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/services/guide.fr-fr.md
@@ -18,8 +18,8 @@ La route d'API **/service** regroupe les actions communes à tous types de servi
## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
* Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant).
## En pratique
diff --git a/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md b/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md
index b47dfd89927..d7e89770cae 100644
--- a/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md
+++ b/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md
@@ -105,7 +105,7 @@ Elements that can be pushed to Logs Data Platform:
|iam_operation|IAM action evaluated|
|iam_identities|IAM identity used for rights evaluation|
|kmip_operation|KMIP operation used|
-|kmip_reason|[Standard KMIP error code](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D){.external}|
+|kmip_reason|[Standard KMIP error code](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D)|
## Go further
diff --git a/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md b/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md
index 23a0b7889a9..c09149b69bc 100644
--- a/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md
+++ b/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md
@@ -106,7 +106,7 @@ Les éléments pouvant être transmis à Logs Data Platform étant :
|iam_operation|Action IAM évaluée|
|iam_identities|Identité IAM utilisée pour l'évaluation des droits|
|kmip_operation|Opération KMIP utilisée|
-|kmip_reason|[code d'erreur KMIP](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D){.external}|
+|kmip_reason|[code d'erreur KMIP](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D)|
## Aller plus loin
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, and other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
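If your alias or index is still empty, you can push a test document before pointing ElastAlert at it. Below is a hedged sketch using the `opensearch-py` client; the cluster address, credentials and index name are placeholders for your own Logs Data Platform values:

```python
# Hedged sketch: index one test document so ElastAlert has data to match.
# Host, credentials and index name are placeholders for your own values.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "<your-cluster>.logs.ovh.com", "port": 9200}],
    http_auth=("<username>", "<password>"),
    use_ssl=True,
)

client.index(
    index="<your-index>",
    body={"message": "test event", "user": "Oles"},
    refresh=True,  # make the document searchable immediately
)
print(client.count(index="<your-index>")["count"])
```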
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in several ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, using the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
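Before wiring one of these integrations into a rule, it can help to test the endpoint by hand. As a hedged illustration, the snippet below posts a test message to a Slack incoming webhook with the `requests` library; the webhook URL is a placeholder:

```python
# Hedged sketch: check that a Slack incoming webhook is reachable before
# referencing it in an ElastAlert rule. The URL is a placeholder.
import requests

webhook_url = "https://hooks.slack.com/services/<your-webhook-path>"
response = requests.post(webhook_url, json={"text": "ElastAlert test alert"})
response.raise_for_status()  # an HTTP 200 means the webhook accepted the message
```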
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, and other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in several ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, using the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, and other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in several ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, using the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, and other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in several ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, using the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, and other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
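To make the option descriptions above concrete, the following is a minimal `config.yaml` sketch. The endpoint, credentials and index name are placeholders (shown in angle brackets); only the keys themselves come from the ElastAlert 2 configuration reference.

```yaml
# config.yaml - global ElastAlert 2 settings
rules_folder: rules                   # every valid .yaml rule here is loaded
run_every:
  minutes: 5                          # how often OpenSearch is queried
buffer_time:
  minutes: 15                         # window of recent results kept in memory
es_host: <your-cluster>.logs.ovh.com  # placeholder endpoint
es_port: 9200
use_ssl: True
es_username: <username>               # placeholder credentials
es_password: <password>
writeback_index: elastalert_status    # meta-index hosted on Logs Data Platform
alert_time_limit:
  days: 2                             # retry window for failed alerts
```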
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
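As an illustration of the alerting integrations listed above, switching a rule from the debug logger to email only takes a few extra keys in the rule file. The SMTP values below are placeholders; the key names (`email`, `smtp_host`, `smtp_port`, `from_addr`) are standard ElastAlert 2 options.

```yaml
# appended to a rule file such as frequency.yml
alert:
- "email"
email:
- "oncall@example.com"            # placeholder recipient
smtp_host: smtp.example.com       # placeholder SMTP relay
smtp_port: 587
from_addr: elastalert@example.com # placeholder sender address
```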
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
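For the Docker route mentioned in the installation step, a minimal `docker-compose.yml` sketch could look as follows. The image name and mount paths follow the ElastAlert 2 project's published conventions, but verify them against the current documentation before use.

```yaml
# docker-compose.yml - run ElastAlert 2 from the project's image
services:
  elastalert2:
    image: jertel/elastalert2:latest
    restart: unless-stopped
    volumes:
      - ./config.yaml:/opt/elastalert/config.yaml  # global configuration
      - ./rules:/opt/elastalert/rules              # folder of rule files
```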
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in its [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages. Check the documentation to verify which Python version is compatible with ElastAlert, and be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+All the available options are described in the [configuration documentation](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule that sends an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and that uses the debug logger **debug**.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert offers many alerting integrations, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Its motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform can also host the ElastAlert meta-indices.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages; check the documentation to verify which Python version is compatible with ElastAlert. Also be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
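As an illustration, the container-based route could look like the sketch below; `jertel/elastalert2` is the image published by the project, and the mount points and local paths are assumptions to adapt to your own configuration and rules:

```shell-session
$ docker pull jertel/elastalert2
$ docker run -d --name elastalert \
    -v $(pwd)/config.yaml:/opt/elastalert/config.yaml \
    -v $(pwd)/rules:/opt/elastalert/rules \
    jertel/elastalert2
```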
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and will also use the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many integrations for alerting, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
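For instance, routing a rule's alerts by email only takes a few extra keys in the rule file. Below is a minimal sketch; the recipient, sender and SMTP host are placeholder values, not settings from this guide:

```yaml
# Hypothetical email alert section for a rule such as frequency.yml
alert:
  - "email"
email:
  - "ops@example.com"                 # placeholder recipient
from_addr: "elastalert@example.com"   # placeholder sender
smtp_host: "smtp.example.com"         # placeholder SMTP relay
```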
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
@@ -6,13 +6,13 @@ updated: 2024-08-07
## Objective
-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It can detect anomalies, spikes, or other patterns of interest. It is production-ready and a well-known alerting standard in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). You can also host the ElastAlert meta-indices directly on Logs Data Platform.
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:
### Installation
-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+ElastAlert can be installed in different ways, as described in the [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the Docker image or install the Python 3 packages; check the documentation to verify which Python version is compatible with ElastAlert. Also be sure to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
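As an illustration, the container-based route could look like the sketch below; `jertel/elastalert2` is the image published by the project, and the mount points and local paths are assumptions to adapt to your own configuration and rules:

```shell-session
$ docker pull jertel/elastalert2
$ docker run -d --name elastalert \
    -v $(pwd)/config.yaml:/opt/elastalert/config.yaml \
    -v $(pwd)/rules:/opt/elastalert/rules \
    jertel/elastalert2
```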
You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
days: 2
```
-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io
### Rules configuration
-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours**, and will also use the **debug** logger.
```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```
-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has many integrations for alerting, including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
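For instance, routing a rule's alerts by email only takes a few extra keys in the rule file. Below is a minimal sketch; the recipient, sender and SMTP host are placeholder values, not settings from this guide:

```yaml
# Hypothetical email alert section for a rule such as frequency.yml
alert:
  - "email"
email:
  - "ops@example.com"                 # placeholder recipient
from_addr: "elastalert@example.com"   # placeholder sender
smtp_host: "smtp.example.com"         # placeholder SMTP relay
```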
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.
#### Apache Server Configuration
-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs; this format allows the Filebeat module to parse the relevant information. Here is a sample configuration file:
```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer
{.thumbnail}
-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Make sure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:
```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html) (see the decompression sketch after this list).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
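Whichever algorithm you pick, the resulting archives can be unpacked later with the usual command-line tools. A quick sketch, with placeholder file names:

```shell-session
$ gunzip archive.gz     # GZIP
$ unzip archive.zip     # DEFLATE (zip)
$ unzstd archive.zst    # Zstandard
$ 7z x archive.7z       # LZMA (7-Zip)
```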
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com), and try them out in the [OVHcloud API Console](/links/api).
You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
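As a sketch of the first step, listing the archives of a stream with the official `ovh` Python wrapper could look like this; the service name and stream ID below are placeholders, and the `/dbaas/logs/.../archive` route is assumed to match what the API console exposes:

```python
# Minimal sketch using the "ovh" Python wrapper (pip install ovh).
# Credentials are read from ovh.conf or environment variables;
# logs-xxxxx and STREAM-ID are placeholders for your own values.
import ovh

client = ovh.Client()
archives = client.get(
    "/dbaas/logs/logs-xxxxx/output/graylog/stream/STREAM-ID/archive"
)
print(archives)  # archive IDs, each of which can then be turned into a download URL
```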
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the timestamp field and retains all the additional fields you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html) (see the decompression sketch after this list).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
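Whichever algorithm you pick, the resulting archives can be unpacked later with the usual command-line tools. A quick sketch, with placeholder file names:

```shell-session
$ gunzip archive.gz     # GZIP
$ unzip archive.zip     # DEFLATE (zip)
$ unzstd archive.zst    # Zstandard
$ 7z x archive.7z       # LZMA (7-Zip)
```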
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com), and try them out in the [OVHcloud API Console](/links/api).
You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
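As a sketch of the first step, listing the archives of a stream with the official `ovh` Python wrapper could look like this; the service name and stream ID below are placeholders, and the `/dbaas/logs/.../archive` route is assumed to match what the API console exposes:

```python
# Minimal sketch using the "ovh" Python wrapper (pip install ovh).
# Credentials are read from ovh.conf or environment variables;
# logs-xxxxx and STREAM-ID are placeholders for your own values.
import ovh

client = ovh.Client()
archives = client.get(
    "/dbaas/logs/logs-xxxxx/output/graylog/stream/STREAM-ID/archive"
)
print(archives)  # archive IDs, each of which can then be turned into a download URL
```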
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the timestamp field and retains all the additional fields you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html) (see the decompression sketch after this list).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
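Whichever algorithm you pick, the resulting archives can be unpacked later with the usual command-line tools. A quick sketch, with placeholder file names:

```shell-session
$ gunzip archive.gz     # GZIP
$ unzip archive.zip     # DEFLATE (zip)
$ unzstd archive.zst    # Zstandard
$ 7z x archive.7z       # LZMA (7-Zip)
```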
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com), and try them out in the [OVHcloud API Console](/links/api).
You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
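As a sketch of the first step, listing the archives of a stream with the official `ovh` Python wrapper could look like this; the service name and stream ID below are placeholders, and the `/dbaas/logs/.../archive` route is assumed to match what the API console exposes:

```python
# Minimal sketch using the "ovh" Python wrapper (pip install ovh).
# Credentials are read from ovh.conf or environment variables;
# logs-xxxxx and STREAM-ID are placeholders for your own values.
import ovh

client = ovh.Client()
archives = client.get(
    "/dbaas/logs/logs-xxxxx/output/graylog/stream/STREAM-ID/archive"
)
print(archives)  # archive IDs, each of which can then be turned into a download URL
```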
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
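+For instance, here is a minimal sketch that reads a downloaded archive, assuming GZIP was the selected compression algorithm and that the archive holds one GELF document per line, as in the sample that follows (the file name is hypothetical):
+
+```python
+# Minimal sketch: decompress a downloaded archive and parse its GELF lines.
+# Assumes GZIP compression and one JSON/GELF document per line; the file
+# name is hypothetical.
+import gzip
+import json
+
+with gzip.open("logs-archive.gz", "rt", encoding="utf-8") as fh:
+    for line in fh:
+        event = json.loads(line)
+        print(event["timestamp"], event["short_message"])
+```
+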
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case).
- The activation of a notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all these steps with the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try each of these steps in the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the `logs-xxxxx` login displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To let you keep a local copy of all your cold-stored archives on Logs Data Platform, we have developed an open-source tool that does this passively: **ldp-archive-mirror**.
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the `timestamp` field and retains all additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input), for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream
On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:
-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html), all of which open with standard tooling (see the sketch after this list).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
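+Since the four compression formats are standard, any mainstream library can open the archives. Below is a small Python sketch dispatching on the file extension; the extensions themselves are assumptions, and the `zstandard` package is a third-party dependency (the other codecs ship with the standard library):
+
+```python
+import gzip
+import lzma
+import zipfile
+
+import zstandard  # pip install zstandard
+
+def decompress(path: str) -> bytes:
+    """Decompress an archive according to its (assumed) file extension."""
+    if path.endswith(".gz"):  # GZIP
+        with gzip.open(path, "rb") as f:
+            return f.read()
+    if path.endswith(".zip"):  # DEFLATE
+        with zipfile.ZipFile(path) as z:
+            return z.read(z.namelist()[0])
+    if path.endswith(".zst"):  # Zstandard
+        with open(path, "rb") as f:
+            return zstandard.ZstdDecompressor().stream_reader(f).read()
+    if path.endswith(".xz"):  # LZMA
+        with lzma.open(path, "rb") as f:
+            return f.read()
+    raise ValueError(f"unrecognized archive extension: {path}")
+```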
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download
#### Using the API
-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (for example, to use them in a Big Data analysis platform), you can perform all of these steps through the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try them out with the [OVHcloud API Console](/links/api).
You will need the OVHcloud service name associated with your account. Your service name is the login logs-xxxxx displayed on the left-hand side of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc
#### Using ldp-archive-mirror
To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [GitHub page](https://github.com/ovh/ldp-archive-mirror).
#### Content of the archive
-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the timestamp field and retains all the additional fields that you may have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md
index 472461ecae8..14c9a522e43 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md
@@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
-- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
+- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
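+Since Bonfire is a convenience wrapper around the Graylog REST API, the same searches can also be scripted directly. The sketch below is a hypothetical illustration using `requests` against Graylog's relative-search endpoint; the cluster address, credentials and query are placeholders to adapt to your own account:
+
+```python
+import requests
+
+# Placeholders: use your own cluster address, username and password
+BASE = "https://gra1.logs.ovh.com/api"
+
+resp = requests.get(
+    f"{BASE}/search/universal/relative",
+    params={"query": "source:myhost", "range": 3600, "limit": 10},
+    auth=("username", "password"),
+    headers={"Accept": "application/json"},
+)
+resp.raise_for_status()
+
+# Each hit wraps the actual log document in a "message" key
+for hit in resp.json().get("messages", []):
+    print(hit["message"].get("message"))
+```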
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md
index 381d88a01d0..fa3049cbe45 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md
@@ -5,7 +5,7 @@ updated: 2024-08-07
## Objective
-[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
+[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options.
This guide will help you to query your logs from the Bonfire command line tool.
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api:
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and macOS are currently supported. Decompress the archive you obtain to get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub page](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all start with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can use them afterwards in any GELF-compatible tool.
The pattern option also allows you to customize colors: both background and text colors are customizable.
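+As noted above, the tail endpoint emits GELF-compatible JSON over a plain WebSocket, so the demo stream can also be consumed without ldp-tail. Here is a short Python sketch using the third-party `websockets` package; the field names printed are assumptions based on the demo messages shown in this guide:
+
+```python
+import asyncio
+import json
+
+import websockets  # pip install websockets
+
+async def tail(address: str) -> None:
+    async with websockets.connect(address) as ws:
+        async for raw in ws:
+            event = json.loads(raw)
+            # Extra GELF fields are underscore-prefixed, e.g. _category
+            print(event.get("_category"), "-", event.get("message"))
+
+asyncio.run(tail("wss://gra1.logs.ovh.com/tail/?tk=demo"))
+```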
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you want (or don't want) to display in your ldp-tail. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages whose fields begin with certain values, show only messages that have a certain field, or keep only messages whose field value is higher or lower than a given value.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
$ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
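+Any programming language can also produce these values directly; for instance, the `begin` and `end` timestamps used in the command above can be recomputed in Python as follows:
+
+```python
+from datetime import datetime, timezone
+
+# 2024-08-05, from 07:00 to 09:00 UTC
+begin = int(datetime(2024, 8, 5, 7, 0, tzinfo=timezone.utc).timestamp())
+end = int(datetime(2024, 8, 5, 9, 0, tzinfo=timezone.utc).timestamp())
+print(begin, end)  # 1722841200 1722848400
+```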
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and macOS are currently supported. Decompress the archive you obtain to get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub page](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all start with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can use them afterwards in any GELF-compatible tool.
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
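If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```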
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
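For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```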
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
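As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```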
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
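If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```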
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
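For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```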
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
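As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```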
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
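If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```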
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
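For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```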
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
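As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```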
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
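If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```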
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
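For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```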
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
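As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```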
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
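If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```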
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
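For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```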
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
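As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```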
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
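If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```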
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
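For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```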
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
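As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```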
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
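If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```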
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
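For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```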
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
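As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```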
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
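If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```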
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
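For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```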
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
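As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```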
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
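If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```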
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open-source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from the same repository: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to get the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the downloaded archive to obtain the **ldp-tail** binary.
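For example, on a 64-bit Linux machine, downloading and extracting the tool takes two commands. The archive name below is only an assumption: pick the exact file name for your platform from the releases page.

```
# Hypothetical asset name: copy the exact one from the releases page
curl -LO https://github.com/ovh/ldp-tail/releases/latest/download/ldp-tail.linux.amd64.tar.gz
tar -xzf ldp-tail.linux.amd64.tar.gz   # the ldp-tail binary now sits in the current directory
```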
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities, all fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can reuse them afterwards in any GELF-compatible tool.
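As a minimal sketch, here is how an extra field keeps its leading underscore inside a pattern. The output above suggests the demo stream carries an extra `_title` field; on your own streams, substitute the field names you actually send.

```
# ._title is an extra (underscore-prefixed) field; .timestamp is a core GELF field
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" \
           --pattern "{{date .timestamp}} | title: {{._title}}"
```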
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you do or do not want to display in your ldp-tail output. The option supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, display only messages that have a given field, or keep only those whose field value is higher or lower than a given threshold.
Here is how you can display only logs that have a title beginning with the word "another"
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to UNIX timestamps.
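If you prefer to stay in the terminal, GNU `date` (standard on Linux; the flags differ on macOS) converts both ways:

```
date -d "2024-08-05 07:00:00 UTC" +%s   # date -> UNIX timestamp (prints 1722841200, the begin value above)
date -u -d @1722841200                  # UNIX timestamp -> human-readable UTC date
```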
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive and you will get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name might suggest). It comes with advanced formatting and filtering capabilities, fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can later use them in any GELF-compatible tool.
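+For instance, a pattern reproducing the sample output above might look like the following sketch (assuming the demo stream exposes GELF extra fields named `_title` and `_joke`, as that output suggests):
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --pattern "My Title: {{ ._title }} , The Joke: {{ ._joke }}"
+```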
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages ldp-tail displays or hides. It supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages whose fields begin with certain values, only messages that have a certain field, or only messages where a field is higher or lower than a given value.
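+As a sketch of the general shape (the two conditions below are hypothetical placeholders; the exact syntax of each condition is given in the README), every additional `--match` flag adds another condition that a message must satisfy to be displayed:
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --match "<first condition>" --match "<second condition>"
+```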
Here is how you can display only logs that have a title beginning with the word "another":
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
$ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to Unix timestamps.
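+If you prefer the command line, GNU `date` on Linux can do the same conversion. For example, the two timestamps used above correspond to 5 August 2024, 07:00 and 09:00 UTC:
+
+```
+date -d "2024-08-05 07:00:00 UTC" +%s
+1722841200
+date -d "2024-08-05 09:00:00 UTC" +%s
+1722848400
+```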
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive and you will get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name might suggest). It comes with advanced formatting and filtering capabilities, fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can later use them in any GELF-compatible tool.
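+For instance, a pattern reproducing the sample output above might look like the following sketch (assuming the demo stream exposes GELF extra fields named `_title` and `_joke`, as that output suggests):
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --pattern "My Title: {{ ._title }} , The Joke: {{ ._joke }}"
+```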
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages ldp-tail displays or hides. It supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages whose fields begin with certain values, only messages that have a certain field, or only messages where a field is higher or lower than a given value.
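+As a sketch of the general shape (the two conditions below are hypothetical placeholders; the exact syntax of each condition is given in the README), every additional `--match` flag adds another condition that a message must satisfy to be displayed:
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --match "<first condition>" --match "<second condition>"
+```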
Here is how you can display only logs that have a title beginning with the word "another":
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
$ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to Unix timestamps.
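+If you prefer the command line, GNU `date` on Linux can do the same conversion. For example, the two timestamps used above correspond to 5 August 2024, 07:00 and 09:00 UTC:
+
+```
+date -d "2024-08-05 07:00:00 UTC" +%s
+1722841200
+date -d "2024-08-05 09:00:00 UTC" +%s
+1722848400
+```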
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive and you will get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name might suggest). It comes with advanced formatting and filtering capabilities, fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can later use them in any GELF-compatible tool.
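+For instance, a pattern reproducing the sample output above might look like the following sketch (assuming the demo stream exposes GELF extra fields named `_title` and `_joke`, as that output suggests):
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --pattern "My Title: {{ ._title }} , The Joke: {{ ._joke }}"
+```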
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages ldp-tail displays or hides. It supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages whose fields begin with certain values, only messages that have a certain field, or only messages where a field is higher or lower than a given value.
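+As a sketch of the general shape (the two conditions below are hypothetical placeholders; the exact syntax of each condition is given in the README), every additional `--match` flag adds another condition that a message must satisfy to be displayed:
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --match "<first condition>" --match "<second condition>"
+```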
Here is how you can display only logs that have a title beginning with the word "another":
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
$ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to Unix timestamps.
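+If you prefer the command line, GNU `date` on Linux can do the same conversion. For example, the two timestamps used above correspond to 5 August 2024, 07:00 and 09:00 UTC:
+
+```
+date -d "2024-08-05 07:00:00 UTC" +%s
+1722841200
+date -d "2024-08-05 09:00:00 UTC" +%s
+1722848400
+```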
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md
index d25f909f0ad..6a44653aed3 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md
@@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t
### Download and test ldp-tail in two minutes
-**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.
+**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow the logs of hundreds of applications and servers in real time. It is written in Go and is completely open source, so if you're curious, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases: go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive and you will get the **ldp-tail** binary.
You can test it right away on our demo stream by using this command in a terminal.
@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa
### Formatting and Filtering
-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name might suggest). It comes with advanced formatting and filtering capabilities, fully documented on the [GitHub repository](https://github.com/ovh/ldp-tail#parameters). Here are the two main options you can use to enhance your output.
#### The pattern option
@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```
-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all be prefixed with an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can later use them in any GELF-compatible tool.
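+For instance, a pattern reproducing the sample output above might look like the following sketch (assuming the demo stream exposes GELF extra fields named `_title` and `_joke`, as that output suggests):
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --pattern "My Title: {{ ._title }} , The Joke: {{ ._joke }}"
+```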
The pattern option also allows you to customize colors: both background and text colors are customizable.
@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p
#### The match option
-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages ldp-tail displays or hides. It supports several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages whose fields begin with certain values, only messages that have a certain field, or only messages where a field is higher or lower than a given value.
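+As a sketch of the general shape (the two conditions below are hypothetical placeholders; the exact syntax of each condition is given in the README), every additional `--match` flag adds another condition that a message must satisfy to be displayed:
+
+```
+./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --match "<first condition>" --match "<second condition>"
+```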
Here is how you can display only logs that have a title beginning with the word "another":
@@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim
$ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}"
```
-You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps.
+You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to Unix timestamps.
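+If you prefer the command line, GNU `date` on Linux can do the same conversion. For example, the two timestamps used above correspond to 5 August 2024, 07:00 and 09:00 UTC:
+
+```
+date -d "2024-08-05 07:00:00 UTC" +%s
+1722841200
+date -d "2024-08-05 09:00:00 UTC" +%s
+1722848400
+```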
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
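+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```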
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
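+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```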
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
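+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```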
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
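+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```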
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
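+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```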
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
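+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```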
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
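+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```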
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
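+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```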
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
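+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```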
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
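+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```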
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
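+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```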
This format imposes a few conventions which, if you don't follow them, can have unwanted consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+So this is everything you need to know to send messages in a valid format and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it is still extensible enough to enrich your logs however you like.
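+To make this concrete, here is a minimal sketch of a GELF record (the values are illustrative): note the mandatory `version`, `host` and `short_message` fields, and the leading underscore on every extra field.
+
+```
+{
+  "version": "1.1",
+  "host": "webserver-1",
+  "short_message": "User login succeeded",
+  "timestamp": 1722841200,
+  "level": 6,
+  "_user_id": 42,
+  "_response_time": 0.125
+}
+```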
This format impose a few conventions that if you don't follow can have many consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+This is everything you need to know to send validly formatted messages and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON format for sending logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it remains extensible enough to enrich your logs as you see fit.
This format imposes a few conventions; failing to follow them can have serious consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+This is everything you need to know to send validly formatted messages and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON format for sending logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it remains extensible enough to enrich your logs as you see fit.
This format imposes a few conventions; failing to follow them can have serious consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+This is everything you need to know to send validly formatted messages and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md
index 97b9fb19a76..3753a511da9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md
@@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform
### What is a valid log for Logs Data Platform?
-Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to.
+Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON format for sending logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: it is directly compatible with Graylog, and it remains extensible enough to enrich your logs as you see fit.
This format imposes a few conventions; failing to follow them can have serious consequences:
@@ -107,7 +107,7 @@ will become:
}
```
-So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+This is everything you need to know to send validly formatted messages and not shoot yourself in the foot. If you have any questions, you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms).
Happy Logging
@@ -115,5 +115,5 @@ Happy Logging
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
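+To make the differences concrete, here is one illustrative event written both as a raw LTSV line and as a raw RFC 5424 line. The field names and values are hypothetical; the trailing newline matches the line-delimited inputs:
+```sh
+# LTSV: one log per line, tab-separated label:value pairs
+printf 'time:2024-01-01T12:00:00Z\thost:webserver-01\tmessage:User login succeeded\tuser_id:42\n'
+# RFC 5424: <priority>version timestamp hostname app-name procid msgid structured-data message
+printf '<134>1 2024-01-01T12:00:00Z webserver-01 myapp - - - User login succeeded\n'
+```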
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` piped into an `openssl` command. Here are 3 examples; pick the format you like the most and run it in your preferred terminal. Note that each format has its own timestamp convention: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only displays recent logs; you can change the scope of the search using the top-left time picker in the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
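+For instance, assuming your stream carries a custom `user_id` field, the query `user_id:42` returns only that user's logs, while `user_id:42 AND message:login` narrows the results to messages containing the term `login`.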
Send several logs with different values for `user_id`, for example (a minimal sketch follows below). On the left of the page you will see the fields present in your stream; you can click the `user_id` checkbox to display all the values for this field alongside the logs.
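+A minimal sketch of such a test over the GELF input, assuming the stream token travels in an `X-OVH-TOKEN` field as in the GELF example above; the cluster address, port and token value are placeholders to replace with your own:
+```sh
+# Send five test logs whose user_id varies from 1 to 5.
+for id in 1 2 3 4 5; do
+  echo -e "{\"version\":\"1.1\",\"host\":\"test-host\",\"short_message\":\"test log $id\",\"X-OVH-TOKEN\":\"your-token\",\"_user_id\":\"$id\"}\0" \
+    | openssl s_client -quiet -no_ign_eof -connect <your-cluster>.logs.ovh.com:<gelf-ssl-port>
+done
+```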
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` piped into an `openssl` command. Here are 3 examples; pick the format you like the most and run it in your preferred terminal. Note that each format has its own timestamp convention: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only displays recent logs; you can change the scope of the search using the top-left time picker in the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; you can click the `user_id` checkbox to display all the values for this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` piped into an `openssl` command. Here are 3 examples; pick the format you like the most and run it in your preferred terminal. Note that each format has its own timestamp convention: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only displays recent logs; you can change the scope of the search using the top-left time picker in the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; you can click the `user_id` checkbox to display all the values for this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` piped into an `openssl` command. Here are 3 examples; pick the format you like the most and run it in your preferred terminal. Note that each format has its own timestamp convention: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only displays recent logs; you can change the scope of the search using the top-left time picker in the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; you can click the `user_id` checkbox to display all the values for this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` piped into an `openssl` command. Here are 3 examples; pick the format you like the most and run it in your preferred terminal. Note that each format has its own timestamp convention: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only displays recent logs; you can change the scope of the search using the top-left time picker in the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; you can click the `user_id` checkbox to display all the values for this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will need to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free: with the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: The native log format used by Graylog. This JSON format lets you send logs very easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient while remaining human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
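As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```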
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
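A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```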
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
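As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```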
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
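A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```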
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
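As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```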
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
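A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```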
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
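As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```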
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
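A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```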
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
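As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```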
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
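A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```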
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
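As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```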
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
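A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```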
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it really easy to send logs. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2) or the plain, unsecured ones if you can't use an SSL transport.
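As a quick sanity check before sending any logs, you can verify that one of the secured ports answers over TLS. Here is a minimal sketch, assuming the secured GELF port (commonly 12202); the cluster address is a placeholder for the one shown in your account:

```bash
# Probe the TLS endpoint of your cluster (replace the placeholder address/port).
# A successful handshake prints the certificate chain and then waits for input.
openssl s_client -connect <your_cluster>.logs.ovh.com:12202
```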
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most and use your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with the right one.
*GELF*:
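A minimal sketch of such a test, assuming the secured GELF port (commonly 12202); the cluster address, token and timestamp are placeholders to replace with your own values:

```bash
# Send one GELF message over TLS; the GELF input requires the trailing null (\0) delimiter.
# Replace <your_cluster> and <your-token>, and set "timestamp" to the current
# Unix time (seconds from epoch) so the message appears in recent searches.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","timestamp":1700000000,"_X-OVH-TOKEN":"<your-token>"}\0' \
| openssl s_client -quiet -no_ign_eof -connect <your_cluster>.logs.ovh.com:12202
```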
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream; you can click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is completely free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format allows you to send logs easily; an example payload follows this list. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and still human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
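As an illustration of the GELF item above, a minimal payload could look as follows (a sketch: the token value is a placeholder, `_X-OVH-TOKEN` is the underscore-prefixed custom field carrying your stream token, and any other custom field such as `_user_id` must also start with an underscore):

```json
{
  "version": "1.1",
  "host": "example.org",
  "short_message": "A short message",
  "level": 5,
  "timestamp": 1700000000,
  "_X-OVH-TOKEN": "your-stream-token",
  "_user_id": 42
}
```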
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2), or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most in your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with your own.
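For reference, assuming GNU `date`, you can produce both timestamp flavours directly from your terminal:

```bash
# Seconds from epoch, as expected by GELF
date +%s                  # e.g. 1700000000

# RFC 3339, as expected by the RFC 5424 and LTSV inputs
date --rfc-3339=seconds   # e.g. 2023-11-14 22:13:20+00:00
```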
*GELF*:
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left of the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format allows you to send logs easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and still human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2), or the plain unsecured ones if you cannot use an SSL transport.
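Before sending anything, you can check that a secured port answers correctly. A quick sketch, assuming `openssl` is installed; the cluster address and SSL port are placeholders for your own values (the exact output lines may vary between OpenSSL versions):

```bash
# Open a TLS connection and print the negotiated protocol and verification result.
openssl s_client -connect <your-cluster-address>:<ssl-port> </dev/null 2>/dev/null \
  | grep -E 'Protocol|Verify return code'
```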
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most in your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform, you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left of the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several log formats, each with its own advantages and disadvantages. Here are the different formats available:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native log format used by Graylog. This JSON format allows you to send logs easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and still human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL enabled (TLS >= 1.2), or the plain unsecured ones if you cannot use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
{.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform, you can easily test your stream with, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples; choose the format you like the most in your preferred terminal. Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), while RFC 5424 and LTSV use [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default, Graylog only displays recent logs; you can change the scope of the search with the time picker at the top left of the Graylog web interface). Also make sure to replace the **token** with your own.
*GELF*:
@@ -141,7 +141,7 @@ helps going
This gives you all the messages that contain the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. On the left of the page you will see the fields present in your stream; click on the `user_id` checkbox to display all the values of this field alongside the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to see your beautiful Dashboards or dig into your Streams themselves.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md
index 33e71ebfaa9..49ff3207f39 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md
index 4212aa05b50..1323ddd4f96 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
\ No newline at end of file
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md
index 4258ccc0425..dea646ab332 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md
@@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
| Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA |
-| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | |
+| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | |
#### 2.3. Customer Information System setup
@@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
| **Activity** | **Customer** | **OVHcloud** |
| --- | --- | --- |
-| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | |
+| Manage reversibility operations: manual extraction, using the API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | |
| Migrate/transfer data | RA | |
### 5. End of service
@@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
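Since the guide notes that role management can be automated through the API, here is a minimal sketch of what such automation could look like with the `ovh` Python client. The route paths and the role-creation payload fields are assumptions based on the `dbaas/logs` section of the API console; only the `RolePermissionAliasCreation` body (a single `aliasId` field) comes from the guide itself.

```python
# Hedged sketch: create a role and share an alias with it.
# pip install ovh -- credentials are read from an ovh.conf file.
import ovh

client = ovh.Client(endpoint="ovh-eu")  # assumption: EU API endpoint

service = "ldp-xx-xxxxx"  # placeholder: your Logs Data Platform service name

# Create a role (assumed RoleCreation payload: name + description).
client.post(
    f"/dbaas/logs/{service}/role",
    name="read-only-pm",
    description="Product managers: dashboards only, no security logs",
)

# Find the UUID of the role we just created.
role_id = next(
    r
    for r in client.get(f"/dbaas/logs/{service}/role")
    if client.get(f"/dbaas/logs/{service}/role/{r}")["name"] == "read-only-pm"
)

# Share an alias with the role. The body is the RolePermissionAliasCreation
# object described above: a JSON object with the single field aliasId.
client.post(
    f"/dbaas/logs/{service}/role/{role_id}/permission/alias",
    aliasId="00000000-0000-0000-0000-000000000000",  # UUID of your alias
)
```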
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md
index c77eeec11d5..2ad2f2ba722 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md
index c77eeec11d5..2ad2f2ba722 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Log policies are often decisions made by an entire team, not individuals. Collaboration remains a top priority for Logs Data Platform, so the platform is designed to let everyone share data in an easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect access rights across several teams: for instance, product managers may be allowed to access some data yet denied access to security logs. That's why we decided to provide [Role-Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) so that users can configure access rights. This document shows you how to use this system.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing a request, e.g. the X-Runtime header for an application server or the SQL processing time for a DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
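The first option above pipes Apache log entries straight to the platform, with openssl providing the secure transport. For readers who would rather do the same from a script, here is a minimal Python sketch that ships one access-log line as a GELF message over TLS. The cluster hostname, the 12202 GELF/TLS port and the `_X-OVH-TOKEN` routing field are assumptions based on the platform's usual GELF setup, not values given in this guide; take the real endpoint and stream token from your own account.

```python
# Hedged sketch: send one Apache access-log line to Logs Data Platform
# as a GELF 1.1 message over TLS, using only the standard library.
import json
import socket
import ssl

HOST = "gra1.logs.ovh.com"   # assumption: replace with your cluster endpoint
PORT = 12202                 # assumption: GELF-over-TLS port
TOKEN = "your-stream-token"  # the write token of your stream


def send_line(line: str) -> None:
    gelf = {
        "version": "1.1",
        "host": socket.gethostname(),
        "short_message": line,
        "_X-OVH-TOKEN": TOKEN,  # routes the message to your stream
    }
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            # GELF over TCP delimits messages with a NUL byte.
            tls.sendall(json.dumps(gelf).encode() + b"\0")


send_line('127.0.0.1 - - [10/Oct/2024:13:55:36 +0200] "GET / HTTP/1.1" 200 512')
```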
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing a request, e.g. the X-Runtime header for an application server or the SQL processing time for a DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing a request, e.g. the X-Runtime header for an application server or the SQL processing time for a DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing a request, e.g. the X-Runtime header for an application server or the SQL processing time for a DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
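Since the table also lists the equivalent nginx variables, an LTSV-style nginx format built from them could look like the following sketch (the format name and field set are illustrative).

```nginx
# Goes in the http {} block. Illustrative LTSV layout using the variables from the table above.
log_format ltsv "time:$time_iso8601\thost:$remote_addr\treq:$request\t"
                "status:$status\tapptime_num:$upstream_response_time";
access_log /var/log/nginx/access.log ltsv;
```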
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
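The openssl requirement can double as a quick reachability test of the ingestion endpoint before any Apache configuration is touched; the address below is a placeholder for your own cluster.

```bash
# Quick TLS reachability check -- a successful handshake confirms the endpoint is reachable.
openssl s_client -connect <your-cluster>.logs.ovh.com:6514 -quiet
```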
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md
index c5cd07bbe0d..391208f34be 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md
@@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract
This guide will present you with three non-intrusive ways to send logs to the Logs Data Platform:
- ask Apache to pipe log entries directly to the platform.
-- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs
-- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module
+- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs.
+- set up [Filebeat](https://www.elastic.co/fr/products/beats/filebeat) with the Apache module.
## Requirements
In order to follow this guide you will need:
- The openssl package: we use it to send the logs securely.
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [To have activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
@@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here
|runtime_num|Execution time for processing some request, e.g. X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime|
|apptime_num|Response time from the upstream server|-|$upstream_response_time|
-The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external}
+The full list of log formats that can be used in Apache is described in [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html).
### Using Filebeat
@@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol to send logs, and a variety of ready-to-use modules for most of the common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or just download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS downloads page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
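On a Debian-based system, for example, the installation can be as short as the sketch below; the artifact URL follows Elastic's usual naming and should be verified on the downloads page linked above.

```bash
# Download and install Filebeat OSS 7.12.1 (Debian/amd64 example; verify the URL on the downloads page).
curl -LO https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-amd64.deb
sudo dpkg -i filebeat-oss-7.12.1-amd64.deb
```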
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example we will enable Apache and Syslog support, but you can just as easily collect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** credentials of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
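Put together, the output section described above could look like this sketch; the host, username and password are placeholders, and the guide's full example remains the reference.

```yaml
# Sketch of the OpenSearch output discussed above -- placeholder values, not the literal config.
setup.template.enabled: false          # template setup is unneeded for this endpoint
output.elasticsearch:
  hosts: ["https://<your-cluster>.logs.ovh.com:9200"]
  username: "<username>"
  password: "<password>"
  index: "ldp-logs"                    # special destination index, do not change
```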
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol to send logs, and a variety of ready-to-use modules for most of the common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or just download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS downloads page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example we will enable Apache and Syslog support, but you can just as easily collect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** credentials of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
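The module can be switched on with Filebeat's standard module commands before its settings are adjusted (paths shown assume a package install).

```bash
# Enable the Apache module; its settings then live in /etc/filebeat/modules.d/apache.yml.
sudo filebeat modules enable apache
sudo filebeat modules list    # verify that apache now appears under Enabled
```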
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol to send logs, and a variety of ready-to-use modules for most of the common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or just download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS downloads page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example we will enable Apache and Syslog support, but you can just as easily collect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
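As a bare skeleton of that file, before the module and output sections are filled in, filebeat.yml might start like the following hedged illustration.

```yaml
# Minimal filebeat.yml skeleton -- the guide's complete example adds module and output settings.
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml   # where enabled modules (apache, system, ...) are read
  reload.enabled: false
```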
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** credentials of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol to send logs, and a variety of ready-to-use modules for most of the common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or just download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS downloads page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example we will enable Apache and Syslog support, but you can just as easily collect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** credentials of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md
index 24e4c01de66..73bf5afee54 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed at [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat).
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can set up Filebeat OSS from a package, compile it from source (you will need a recent [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the version best suited to your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12. Later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information is available in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can easily set up [any other supported module](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the credentials **** and **** of your account. As with all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use [ingest pipelines](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient protocol for sending logs and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed here: [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat)
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can install Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
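If you go the tarball route, fetching that exact version on a 64-bit Linux host might look like the sketch below. The URL follows Elastic's usual artifact naming scheme; adjust the platform suffix for your distribution:

```bash
# Download and unpack Filebeat OSS 7.12.1 (Linux x86_64 tarball)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-linux-x86_64.tar.gz
tar xzvf filebeat-oss-7.12.1-linux-x86_64.tar.gz
cd filebeat-*-linux-x86_64   # enter the extracted directory
```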
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12, as later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information can be found in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can just as easily harvest [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
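As a minimal sketch (not the full configuration reproduced later in this guide), a **filebeat.yml** pointing at a Logs Data Platform OpenSearch endpoint could look like the following; the cluster hostname and credentials are placeholders to replace with your own values:

```yaml
filebeat.inputs:
  - type: log                      # harvest plain log files
    paths:
      - /var/log/*.log

setup.template.enabled: false      # template management is unneeded for the LDP endpoint
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["https://<your-cluster>.logs.ovh.com:9200"]   # placeholder endpoint
  username: "<your-username>"                           # placeholder credentials
  password: "<your-password>"
  index: "ldp-logs"                # the special LDP destination index, do not change
```

Before starting the shipper, `filebeat test config` and `filebeat test output` are handy for validating the file and the connection.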
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** of your account. Like all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Do not change **ldp-logs**, since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
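The detailed steps are elided from this hunk, but with a package install the module is typically switched on with the bundled CLI (assuming the `filebeat` binary is on your PATH):

```bash
sudo filebeat modules enable apache   # activate the Apache module
sudo filebeat modules list            # check which modules are now enabled
```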
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient log-shipping protocol and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed here: [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat)
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can install Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12, as later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information can be found in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can just as easily harvest [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** of your account. Like all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Do not change **ldp-logs**, since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md
index 8fc5d85ad78..0c10b0739cc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md
@@ -5,7 +5,7 @@ updated: 2024-11-28
## Objective
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open-source file harvester used to fetch log files; it can easily be set up to feed them into Logs Data Platform.
The main benefits of Filebeat are its resilient log-shipping protocol and a variety of ready-to-use modules for most common applications.
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
## Instructions
### Set up Filebeat OSS 7.X on your system
-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms, as listed here: [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat)
-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can install Filebeat OSS from a package, compile it from source (you will need the latest [Go compiler](https://golang.org/)), or simply download the binary to start immediately.
-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to the [Filebeat OSS download page](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution.
The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).
@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb
> [!warning]
> Do not use a version higher than 7.12, as later versions are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information can be found in the [compatibility matrix documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).
### Configure Filebeat OSS 7.X on your system
-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example, we will enable Apache and Syslog support, but you can just as easily harvest [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).
Filebeat expects a configuration file named **filebeat.yml**.
@@ -216,7 +216,7 @@ output.elasticsearch:
This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide the **username** and **password** of your account. Like all Logs Data Platform APIs, you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Do not change **ldp-logs**, since it is our special destination index.
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with Filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.
#### Enable Apache Filebeat module
@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source
Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)
## Going further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
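The full manifests come later in the guide and are not part of this hunk, but as a rough sketch of where the configuration ends up, the Fluent Bit side typically pairs a GELF output with a filter that stamps each record with your stream token. The host and token below are placeholders, and the port should be checked against your service details in the OVHcloud Manager:

```ini
# Attach the LDP stream token to every record (placeholder value)
[FILTER]
    Name    record_modifier
    Match   *
    Record  X-OVH-TOKEN <your-stream-token>

# Ship records to the LDP entry point over GELF/TLS (placeholder host;
# 12202 is the usual GELF TLS port)
[OUTPUT]
    Name                    gelf
    Match                   *
    Host                    <your-cluster>.logs.ovh.com
    Port                    12202
    Mode                    tls
    Gelf_Short_Message_Key  log
```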
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
index b276ae04ce3..f9e5b0d2029 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can, of course, apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can of course also apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can of course also apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can of course also apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can of course also apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md
@@ -8,13 +8,13 @@ updated: 2024-07-18
In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and get a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you configure it for Logs Data Platform; you can of course also apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).
## Requirements
Note that in order to complete this tutorial, you should have at least:
-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and retrieved its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working Kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
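To make that input → filter → output flow concrete before diving into the deployment itself, here is a generic, minimal pipeline sketch; the TCP port, JSON codec and grok pattern below are illustrative placeholders rather than values taken from this guide.

```
input {
  # Receive JSON-encoded events over TCP (port chosen for illustration)
  tcp {
    port  => 4242
    codec => json
  }
}

filter {
  # Parse an Apache-style access log out of the "message" field
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  # Print transformed events to the console; a real deployment would
  # target your Logs Data Platform stream instead
  stdout { codec => rubydebug }
}
```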
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a customized Logstash with a specific configuration, and how to send logs from any source directly to your stream on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the latest Logstash 7 version (7.8 as of July 2020). Of course, we will update to new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins, so don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash:
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins; don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins; don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins; don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins; don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md
index 23543a960ce..35e4585183a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md
@@ -5,7 +5,7 @@ updated: 2025-04-25
## Objective
-[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}.
+[Logstash](https://github.com/elastic/logstash) is open-source software developed by Elastic. It can collect messages from several inputs and send them to different types of outputs using a variety of codecs, processing and transforming them along the way. You can learn more about it on [the official website](https://www.elastic.co/products/logstash).
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform.
@@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send
The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available.
#### Logstash Plugins
-For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+For your information, here is the list of Logstash plugins we support. We welcome any suggestions for additional plugins; don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
##### Inputs plugins
@@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{
Here are some links to help you go further with Logstash
-- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external}
-- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external}
+- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
- [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat)
-- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external}
-- [A Ruby regular expression editor](https://rubular.com/){.external}
+- [Grok Constructor](http://grokconstructor.appspot.com/do/match)
+- [A Ruby regular expression editor](https://rubular.com/)
That's all you need to know about the Logstash Collector on Logs Data Platform.
@@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
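To illustrate the GELF constraint mentioned in the list above, here is a minimal sketch of sending one GELF message over TCP, terminated by the required null byte. The host, port and the `_X-OVH-TOKEN` additional field are placeholders to adapt to your own cluster and stream:

```python
import json
import socket

HOST = "gra1.logs.ovh.com"  # placeholder: your cluster address
PORT = 2202                 # placeholder: your cluster's GELF TCP port

gelf_message = {
    "version": "1.1",                 # required by the GELF specification
    "host": "my-app-01",
    "short_message": "A GELF test message",
    "level": 6,                       # informational
    "_X-OVH-TOKEN": "REPLACE_WITH_YOUR_STREAM_TOKEN",  # additional field
}

# The GELF input only accepts a null (\0) delimiter between messages.
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(json.dumps(gelf_message).encode("utf-8") + b"\0")
```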
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
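As a side note on the LTSV format listed above: it is nothing more than tab-separated `label:value` pairs, one event per line (or per null byte, depending on the input you target). A small sketch:

```python
# LTSV: tab-separated "label:value" pairs, one event per line.
def to_ltsv(fields: dict) -> str:
    return "\t".join(f"{label}:{value}" for label, value in fields.items())

line = to_ltsv({
    "time": "2024-05-01T12:00:00Z",
    "host": "my-app-01",
    "status": 200,
    "message": "GET /index.html",
})
print(line)
# time:2024-05-01T12:00:00Z<TAB>host:my-app-01<TAB>status:200<TAB>message:GET /index.html
```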
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
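For the RFC 5424 format in the list above, a message is a single line of the form `<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG`, where `PRI` is `facility * 8 + severity`. The structured-data element used below to carry a stream token is purely illustrative (it uses the documentation-reserved enterprise number 32473); check your stream's settings for the exact fields expected:

```python
from datetime import datetime, timezone

# PRI = facility * 8 + severity; facility 16 (local0), severity 6 (info).
pri = 16 * 8 + 6
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
message = (
    f"<{pri}>1 {timestamp} my-app-01 my-app 1234 ID47 "
    f'[example@32473 X-OVH-TOKEN="REPLACE_WITH_YOUR_STREAM_TOKEN"] '
    f"A syslog test message"
)
print(message)
```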
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md
index 21a816cee78..94b35495c8a 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that keeps a low footprint while delivering high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON-based format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON-based format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON-based format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON-based format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md
index 6942b7c5d82..b035abbd503 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md
@@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa
The log formats that are accepted by Logs Data Platform are the following:
-- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
-- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+- **GELF**: This is the native log format used by Graylog. This JSON-based format makes it easy to send logs. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is efficient and human-readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs, accepting either a line delimiter or a null delimiter.
+- **RFC 5424**: This format is commonly used by log utilities such as syslog. It is extensible enough to let you send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This binary format allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the Beats family in the Elasticsearch ecosystem (e.g. [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
### Mutualized vs Dedicated inputs
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
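+
+As an illustration of what OpenSearch Ingest allows, the sketch below creates a pipeline with an `uppercase` processor through the standard `_ingest/pipeline` API. It is a hypothetical example: the pipeline name `to-upper`, the field `app`, and the user, password and cluster placeholders are all assumptions to adapt to your own platform.
+
+```shell-session
+$ # Sketch: create a hypothetical ingest pipeline (replace user, password and cluster)
+$ curl -u '<user>:<password>' -H 'Content-Type: application/json' -XPUT 'https://<your-cluster>.logs.ovh.com:9200/_ingest/pipeline/to-upper' -d '{ "description" : "uppercase the app field", "processors" : [ { "uppercase" : { "field" : "app" } } ] }'
+```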
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This log document will be transformed into a valid GELF log, and any missing field will be filled in automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has the sole purpose of adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
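+
+Assuming the configuration above is saved as `/etc/vector/vector.toml` (an example path), a minimal sketch to check it and start Vector could look like this:
+
+```shell-session
+$ # Validate the configuration, then run Vector with it
+$ vector validate /etc/vector/vector.toml
+$ vector --config /etc/vector/vector.toml
+```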
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This log document will be transformed into a valid GELF log, and any missing field will be filled in automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has the sole purpose of adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This log document will be transformed into a valid GELF log, and any missing field will be filled in automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has the sole purpose of adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This log document will be transformed into a valid GELF log, and any missing field will be filled in automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them, and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has the sole purpose of adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points, as in the sketch after this list:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
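+
+Putting these points together, the sink section can be sketched as below. The cluster address, user and password are placeholders for your own Logs Data Platform values, and the option names follow the Vector Elasticsearch sink documentation:
+
+```toml
+# Ship the tagged events to the Logs Data Platform OpenSearch endpoint.
+[sinks.ldp]
+type = "elasticsearch"
+inputs = ["token"]
+endpoints = ["https://<your-cluster>.logs.ovh.com:9200"] # placeholder address
+mode = "bulk"
+bulk.index = "ldp-logs"
+compression = "gzip"       # gzip is supported on our endpoint
+healthcheck.enabled = true # verify the platform is reachable before sending
+auth.strategy = "basic"
+auth.user = "<username>"     # placeholder
+auth.password = "<password>" # placeholder
+```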
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log, and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global `data_dir` option.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform part of the configuration relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} menu of the stream, on the stream page of the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log, and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global `data_dir` option.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform part of the configuration relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} menu of the stream, on the stream page of the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log, and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global `data_dir` option.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform part of the configuration relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} menu of the stream, on the stream page of the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log, and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global `data_dir` option.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform part of the configuration relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} menu of the stream, on the stream page of the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log, and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global `data_dir` option.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform part of the configuration relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named `token` here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} menu of the stream, on the stream page of the Logs Data Platform manager. Replace **** with the token value of your stream.
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** (the token transform) and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only required fields are the **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This JSON document will be transformed into a valid GELF log, and any missing field will be filled in automatically. To respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named **token** here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
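+For reference, here is a minimal sketch of such a remap transform; it reuses the sample token from the curl examples above, and the exact block in your configuration may differ:
+```toml
+[transforms.token]
+type = "remap"        # the remap transform runs a small VRL program on each event
+inputs = ["journald"] # consume events from the journald source defined above
+source = '''
+."X-OVH-TOKEN" = "7f00cc33-1a7a-4464-830f-91be90dcc880"  # sample token; use your own stream token
+'''
+```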
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** entry, **token**, and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
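+As a quick illustration of what OpenSearch Ingest allows (the endpoint and credentials are detailed in the next section), the following sketch first declares a pipeline and then indexes a document through it. The pipeline name `tag_env` and its `set` processor are illustrative, and this sketch assumes pipeline creation is permitted on the mutualized endpoint:
+```shell-session
+$ curl -H 'Content-Type: application/json' -u ':' -XPUT https://.logs.ovh.com:9200/_ingest/pipeline/tag_env -d '{ "processors" : [ { "set" : { "field" : "environment", "value" : "production" } } ] }'
+$ curl -H 'Content-Type: application/json' -u ':' -XPOST 'https://.logs.ovh.com:9200/ldp-logs/_doc?pipeline=tag_env' -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" }'
+```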
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only required fields are the **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This JSON document will be transformed into a valid GELF log, and any missing field will be filled in automatically. To respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named **token** here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
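+For reference, here is a minimal sketch of such a remap transform; it reuses the sample token from the curl examples above, and the exact block in your configuration may differ:
+```toml
+[transforms.token]
+type = "remap"        # the remap transform runs a small VRL program on each event
+inputs = ["journald"] # consume events from the journald source defined above
+source = '''
+."X-OVH-TOKEN" = "7f00cc33-1a7a-4464-830f-91be90dcc880"  # sample token; use your own stream token
+'''
+```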
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** entry, **token**, and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
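+As a quick illustration of what OpenSearch Ingest allows (the endpoint and credentials are detailed in the next section), the following sketch first declares a pipeline and then indexes a document through it. The pipeline name `tag_env` and its `set` processor are illustrative, and this sketch assumes pipeline creation is permitted on the mutualized endpoint:
+```shell-session
+$ curl -H 'Content-Type: application/json' -u ':' -XPUT https://.logs.ovh.com:9200/_ingest/pipeline/tag_env -d '{ "processors" : [ { "set" : { "field" : "environment", "value" : "production" } } ] }'
+$ curl -H 'Content-Type: application/json' -u ':' -XPOST 'https://.logs.ovh.com:9200/ldp-logs/_doc?pipeline=tag_env' -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" }'
+```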
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only required fields are the **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This JSON document will be transformed into a valid GELF log, and any missing field will be filled in automatically. To respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf
{.thumbnail}
The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`.
-Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example:
+Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }'
@@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a
## Use case: Vector
-[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
+[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module.
-The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities.
+Vector's integrations are numerous: more than 20 sources, more than 25 transforms and 30 sinks are supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration that works, from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities.
```toml
data_dir = "/var/lib/vector" # optional, must be allowed in read-write
@@ -81,11 +81,11 @@ auth.password = ""
Here is the explanation of this configuration.
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir.
+The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default, this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option `data_dir`.
-The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named **token** here, has a single purpose: adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found in the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream.
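+For reference, here is a minimal sketch of such a remap transform; it reuses the sample token from the curl examples above, and the exact block in your configuration may differ:
+```toml
+[transforms.token]
+type = "remap"        # the remap transform runs a small VRL program on each event
+inputs = ["journald"] # consume events from the journald source defined above
+source = '''
+."X-OVH-TOKEN" = "7f00cc33-1a7a-4464-830f-91be90dcc880"  # sample token; use your own stream token
+'''
+```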
-The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points:
+The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** entry, **token**, and sets up several configuration points:
- gzip is supported on our endpoint, so it's activated with the **compression** configuration.
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well
@@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md
index 588bb12a287..e6c34b4fde6 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md
@@ -6,11 +6,11 @@ updated: 2024-06-29
## Overview
-OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you also want to use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and benefit from automatic retention management, then you will need to use the log pipeline. Our OpenSearch log endpoint enables you to send logs using the HTTP OpenSearch API. Moreover, the endpoint also supports [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can apply advanced processing to your logs before they are sent into the pipeline. There is no additional cost for this feature; all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start).
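+As a quick illustration of what OpenSearch Ingest allows (the endpoint and credentials are detailed in the next section), the following sketch first declares a pipeline and then indexes a document through it. The pipeline name `tag_env` and its `set` processor are illustrative, and this sketch assumes pipeline creation is permitted on the mutualized endpoint:
+```shell-session
+$ curl -H 'Content-Type: application/json' -u ':' -XPUT https://.logs.ovh.com:9200/_ingest/pipeline/tag_env -d '{ "processors" : [ { "set" : { "field" : "environment", "value" : "production" } } ] }'
+$ curl -H 'Content-Type: application/json' -u ':' -XPOST 'https://.logs.ovh.com:9200/ldp-logs/_doc?pipeline=tag_env' -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" }'
+```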
## OpenSearch endpoint
-The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send:
+The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only required fields are the **X-OVH-TOKEN** and one extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This JSON document will be transformed into a valid GELF log, and any missing field will be filled in automatically. To respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send:
```shell-session
$ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}'
@@ -21,7 +21,7 @@ Replace the ``, `` and `