diff --git a/pages/manage_and_operate/api/account/guide.en-asia.md b/pages/manage_and_operate/api/account/guide.en-asia.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-asia.md
+++ b/pages/manage_and_operate/api/account/guide.en-asia.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-au.md b/pages/manage_and_operate/api/account/guide.en-au.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-au.md
+++ b/pages/manage_and_operate/api/account/guide.en-au.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-ca.md b/pages/manage_and_operate/api/account/guide.en-ca.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-ca.md
+++ b/pages/manage_and_operate/api/account/guide.en-ca.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-gb.md b/pages/manage_and_operate/api/account/guide.en-gb.md
index b90d1d78b6e..7a08d141f37 100644
--- a/pages/manage_and_operate/api/account/guide.en-gb.md
+++ b/pages/manage_and_operate/api/account/guide.en-gb.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
 ## Instructions
@@ -54,7 +54,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-ie.md b/pages/manage_and_operate/api/account/guide.en-ie.md
index b90d1d78b6e..7a08d141f37 100644
--- a/pages/manage_and_operate/api/account/guide.en-ie.md
+++ b/pages/manage_and_operate/api/account/guide.en-ie.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
 ## Instructions
@@ -54,7 +54,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-sg.md b/pages/manage_and_operate/api/account/guide.en-sg.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-sg.md
+++ b/pages/manage_and_operate/api/account/guide.en-sg.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.en-us.md b/pages/manage_and_operate/api/account/guide.en-us.md
index c8be1605498..f4b73798009 100644
--- a/pages/manage_and_operate/api/account/guide.en-us.md
+++ b/pages/manage_and_operate/api/account/guide.en-us.md
@@ -12,8 +12,8 @@ This guide will also show you how to add one or more logins to this sub-account
 ## Requirements
-* Being connected on [OVHcloud API](/links/api){.external}.
-* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}.
+* Being connected on [OVHcloud API](/links/api).
+* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps).
 * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable).
@@ -55,7 +55,7 @@ With the previously created ConsumerKey.
 * email : add an email address for this user
 * login : enter a relevant string for this user
-* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external}.
+* password : it must meet the requirements of [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) and be valid by testing it on [Pwned Passwords](https://haveibeenpwned.com/Passwords).
 ## Go further
diff --git a/pages/manage_and_operate/api/account/guide.fr-ca.md b/pages/manage_and_operate/api/account/guide.fr-ca.md
index fb4d1c56e49..4acb4bbfacb 100644
--- a/pages/manage_and_operate/api/account/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/account/guide.fr-ca.md
@@ -13,8 +13,8 @@ Ce guide vous permettra aussi d'ajouter un ou des logins a ce sous-compte pour l
 ## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
 * Avoir un compte client avec un tag Reseller (Contactez votre commercial pour connaitre votre éligibilité le cas échéant).
 ## En pratique
@@ -55,7 +55,7 @@ Avec la ConsumerKey précédemment obtenue
 * email : ajoutez une adresse mail pour cet utilisateur
 * login : renseignez une chaîne de caractères
-* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external} .
+* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords) .
 ## Aller plus loin
diff --git a/pages/manage_and_operate/api/account/guide.fr-fr.md b/pages/manage_and_operate/api/account/guide.fr-fr.md
index fb4d1c56e49..4acb4bbfacb 100644
--- a/pages/manage_and_operate/api/account/guide.fr-fr.md
+++ b/pages/manage_and_operate/api/account/guide.fr-fr.md
@@ -13,8 +13,8 @@ Ce guide vous permettra aussi d'ajouter un ou des logins a ce sous-compte pour l
 ## Prérequis
-* Être connecté aux [API OVHcloud](/links/api){.external}.
-* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}.
+* Être connecté aux [API OVHcloud](/links/api).
+* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps).
 * Avoir un compte client avec un tag Reseller (Contactez votre commercial pour connaitre votre éligibilité le cas échéant).
 ## En pratique
@@ -55,7 +55,7 @@ Avec la ConsumerKey précédemment obtenue
 * email : ajoutez une adresse mail pour cet utilisateur
 * login : renseignez une chaîne de caractères
-* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn){.external} et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords){.external} .
+* password : celui-ci doit répondre aux exigences de [zxcvbn: Low-Budget Password Strength Estimation](https://github.com/dropbox/zxcvbn) et être valide en le testant sur [Pwned Passwords](https://haveibeenpwned.com/Passwords) .
 ## Aller plus loin
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-asia.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
 The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
 You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-au.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
 The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
 You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-ca.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
 The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
 You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
index 1872571a0f0..d30ebbc9433 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-gb.md
@@ -112,5 +112,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
index 1872571a0f0..d30ebbc9433 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-ie.md
@@ -112,5 +112,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-sg.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
 The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
 You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md b/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
index 4cef4d94ca8..ebd380b4848 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.en-us.md
@@ -20,7 +20,7 @@ As an example, let's assume that you want to create a marketplace in which you,
 The first part, as the application developer, is to register your application on OVHcloud.
-To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/){.external}.
+To do so, go to [OVHcloud API](https://ca.api.ovh.com/createApp/).
 You will need to log in and set an application name and description.
@@ -111,5 +111,5 @@ Happy development !
 ## Go further
-- [API Console](/links/api){.external}
+- [API Console](/links/api)
diff --git a/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md b/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
index 276ffb58c8d..4297aab4572 100644
--- a/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
+++ b/pages/manage_and_operate/api/api_right_delegation/guide.fr-ca.md
@@ -19,7 +19,7 @@ Par exemple, supposons que vous voulez créer un marché dans lequel vous, en ta
 La première partie, en tant que développeur d'applications, consiste à enregistrer votre application sur OVHcloud.
-Pour ce faire, accédez à l'[API OVHcloud](https://ca.api.ovh.com/createApp/){.external}
+Pour ce faire, accédez à l'[API OVHcloud](https://ca.api.ovh.com/createApp/)
 Vous devrez vous connecter et définir un nom et une description de l'application.
diff --git a/pages/manage_and_operate/api/apiv2/guide.de-de.md b/pages/manage_and_operate/api/apiv2/guide.de-de.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.de-de.md
+++ b/pages/manage_and_operate/api/apiv2/guide.de-de.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-asia.md b/pages/manage_and_operate/api/apiv2/guide.en-asia.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-asia.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-asia.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-au.md b/pages/manage_and_operate/api/apiv2/guide.en-au.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-au.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-au.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-ca.md b/pages/manage_and_operate/api/apiv2/guide.en-ca.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-ca.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-ca.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-gb.md b/pages/manage_and_operate/api/apiv2/guide.en-gb.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-gb.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-gb.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-ie.md b/pages/manage_and_operate/api/apiv2/guide.en-ie.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-ie.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-ie.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-sg.md b/pages/manage_and_operate/api/apiv2/guide.en-sg.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-sg.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-sg.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.en-us.md b/pages/manage_and_operate/api/apiv2/guide.en-us.md
index a266ee53440..f6de8723804 100644
--- a/pages/manage_and_operate/api/apiv2/guide.en-us.md
+++ b/pages/manage_and_operate/api/apiv2/guide.en-us.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
 The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias.
@@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve
 #### Retrieve available versions via the console
-You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}.
+You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers).
 The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas.
@@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro
 - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk).
 - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state.
-The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}.
+The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1).
-The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
+The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer.
 ### Asynchronous management and events
@@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi
 Several libraries are available to use the OVHcloud APIs:
-- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external}
-- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external}
-- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external}
+- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh)
+- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh)
+- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh)
 ## Go further
diff --git a/pages/manage_and_operate/api/apiv2/guide.es-es.md b/pages/manage_and_operate/api/apiv2/guide.es-es.md
index 829c9a0fa28..acc8c9f963d 100644
--- a/pages/manage_and_operate/api/apiv2/guide.es-es.md
+++ b/pages/manage_and_operate/api/apiv2/guide.es-es.md
@@ -6,11 +6,11 @@ updated: 2023-04-17
 ## Objective
-The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
+The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel.
 Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published.
-A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}.
+A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2).
 This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias. @@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve #### Retrieve available versions via the console -You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas. @@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk). - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state. -The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}. +The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1). -The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. +The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. 
### Asynchronous management and events @@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi Several libraries are available to use the OVHcloud APIs: -- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Go further diff --git a/pages/manage_and_operate/api/apiv2/guide.es-us.md b/pages/manage_and_operate/api/apiv2/guide.es-us.md index a266ee53440..f6de8723804 100644 --- a/pages/manage_and_operate/api/apiv2/guide.es-us.md +++ b/pages/manage_and_operate/api/apiv2/guide.es-us.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objective -The APIs available at [https://ca.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. +The APIs available at [https://ca.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published. -A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}. +A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2). This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
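As an example of using the client libraries listed above, a first authenticated call with `python-ovh` can look like the following. The credentials are placeholders; the library can also read them from an `ovh.conf` file or environment variables.

```python
import ovh

client = ovh.Client(
    endpoint="ovh-eu",                    # target API region
    application_key="YOUR_APP_KEY",
    application_secret="YOUR_APP_SECRET",
    consumer_key="YOUR_CONSUMER_KEY",
)

# Simple authenticated call: retrieve the description of the account.
me = client.get("/me")
print(me)
```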
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias. @@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve #### Retrieve available versions via the console -You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas. @@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk). - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state. -The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}. +The first approach is the one used by the current API: [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1). -The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. +The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. 
### Asynchronous management and events @@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi Several libraries are available to use the OVHcloud APIs: -- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Go further diff --git a/pages/manage_and_operate/api/apiv2/guide.fr-ca.md b/pages/manage_and_operate/api/apiv2/guide.fr-ca.md index 08cef835cd2..f5f9c030195 100644 --- a/pages/manage_and_operate/api/apiv2/guide.fr-ca.md +++ b/pages/manage_and_operate/api/apiv2/guide.fr-ca.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objectif -Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. Historiquement, les API d'OVHcloud sont disponibles sous la branche **/1.0** correspondant à la première version de l'API que nous avons publiée. -Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2){.external}. +Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://ca.api.ovh.com/v2](https://ca.api.ovh.com/console-preview/?branch=v2). Cette nouvelle branche regroupera des nouvelles routes d'API, retravaillées sous un nouveau format, et deviendra la branche d'API principale pour les nouveaux développements de fonctionnalités de produits OVHcloud.
La branche **/1.0** continuera d'exister en parallèle de la branche **/v2** mais ne contiendra pas la même fonctionnalité. En tant que client, vous pourrez consommer des API de la branche **/1.0** et **/v2** simultanément dans vos programmes, tout en conservant la même authentification et les mêmes outils pour appeler l'API. Afin de standardiser le nommage de nos branches d'API, la branche **/1.0** est également disponible à travers l'alias **/v1**. @@ -51,7 +51,7 @@ Lors de la sortie d'une nouvelle version majeure, nous ferons une évaluation de #### Récupérer les versions disponibles via la console -Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://ca.api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). Les différentes versions sont affichées dans la section **SCHEMAS VERSION**. Vous pouvez ensuite sélectionner une version pour voir les schémas d'API associés. @@ -64,9 +64,9 @@ Deux approches opposées existent pour voir l'état courant d'une ressource à t - **Approche centrée sur le processus** : l'API expose l'état courant des ressources (par exemple une instance Public Cloud) et offre des opérations pour les modifier (par exemple, changer la taille d'un disque). - **Approche centrée sur les ressources** : l'API expose à la fois l'état courant des ressources ainsi que l'état souhaité. Les modifications se font directement en mettant à jour l'état souhaité des ressources. Dans ce cas, l'API effectue elle-même les actions nécessaires pour atteindre l'état ciblé. -La première approche est celle utilisée par l'API actuelle : [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1){.external}. +La première approche est celle utilisée par l'API actuelle : [https://ca.api.ovh.com/v1](https://ca.api.ovh.com/v1). -L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io){.external}. Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client. +L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io). Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client. 
### Gestion asynchrone et évènements @@ -154,9 +154,9 @@ L'absence de l'en-tête `X-Pagination-Cursor-Next` dans une réponse d'API conte Plusieurs bibliothèques sont disponibles pour utiliser les API OVHcloud : -- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Aller plus loin diff --git a/pages/manage_and_operate/api/apiv2/guide.fr-fr.md b/pages/manage_and_operate/api/apiv2/guide.fr-fr.md index 9ed8db541c5..c1019992051 100644 --- a/pages/manage_and_operate/api/apiv2/guide.fr-fr.md +++ b/pages/manage_and_operate/api/apiv2/guide.fr-fr.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objectif -Les API disponibles sur [https://eu.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://eu.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. Historiquement, les API d'OVHcloud sont disponibles sous la branche **/1.0** correspondant à la première version de l'API que nous avons publiée. -Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}. +Une nouvelle branche des API OVHcloud est disponible sous le préfixe **/v2** sur [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2). Cette nouvelle branche regroupera des nouvelles routes d'API, retravaillées sous un nouveau format, et deviendra la branche d'API principale pour les nouveaux développements de fonctionnalités de produits OVHcloud.
La branche **/1.0** continuera d'exister en parallèle de la branche **/v2** mais ne contiendra pas la même fonctionnalité. En tant que client, vous pourrez consommer des API de la branche **/1.0** et **/v2** simultanément dans vos programmes, tout en conservant la même authentification et les mêmes outils pour appeler l'API. Afin de standardiser le nommage de nos branches d'API, la branche **/1.0** est également disponible à travers l'alias **/v1**. @@ -51,7 +51,7 @@ Lors de la sortie d'une nouvelle version majeure, nous ferons une évaluation de #### Récupérer les versions disponibles via la console -Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +Il est possible de voir la liste des versions disponible sur la console de l'API OVHcloud. Pour cela, ouvrez la [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). Les différentes versions sont affichées dans la section **SCHEMAS VERSION**. Vous pouvez ensuite sélectionner une version pour voir les schémas d'API associés. @@ -64,9 +64,9 @@ Deux approches opposées existent pour voir l'état courant d'une ressource à t - **Approche centrée sur le processus** : l'API expose l'état courant des ressources (par exemple une instance Public Cloud) et offre des opérations pour les modifier (par exemple, changer la taille d'un disque). - **Approche centrée sur les ressources** : l'API expose à la fois l'état courant des ressources ainsi que l'état souhaité. Les modifications se font directement en mettant à jour l'état souhaité des ressources. Dans ce cas, l'API effectue elle-même les actions nécessaires pour atteindre l'état ciblé. -La première approche est celle utilisée par l'API actuelle : [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}. +La première approche est celle utilisée par l'API actuelle : [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1). -L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io){.external}. Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client. +L'APIv2 utilise l'approche centrée sur les ressources, qui la rend plus facilement utilisable « *as-code* », notamment à travers des outils tels que [Terraform](https://www.terraform.io). Ce fonctionnement permet également d'abstraire toute la complexité du processus de transformation d'une ressource d'un état à un autre puisqu'il est à la charge de l'API et non du client. 
### Gestion asynchrone et évènements @@ -154,9 +154,9 @@ L'absence de l'en-tête `X-Pagination-Cursor-Next` dans une réponse d'API conte Plusieurs bibliothèques sont disponibles pour utiliser les API OVHcloud : -- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go : [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python : [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP : [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Aller plus loin diff --git a/pages/manage_and_operate/api/apiv2/guide.it-it.md b/pages/manage_and_operate/api/apiv2/guide.it-it.md index 829c9a0fa28..acc8c9f963d 100644 --- a/pages/manage_and_operate/api/apiv2/guide.it-it.md +++ b/pages/manage_and_operate/api/apiv2/guide.it-it.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objective -The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. +The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published. -A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}. +A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2). This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias. @@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve #### Retrieve available versions via the console -You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas. @@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk). - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state. -The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}. +The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1). -The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. +The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. 
### Asynchronous management and events @@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi Several libraries are available to use the OVHcloud APIs: -- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Go further diff --git a/pages/manage_and_operate/api/apiv2/guide.pl-pl.md b/pages/manage_and_operate/api/apiv2/guide.pl-pl.md index 829c9a0fa28..acc8c9f963d 100644 --- a/pages/manage_and_operate/api/apiv2/guide.pl-pl.md +++ b/pages/manage_and_operate/api/apiv2/guide.pl-pl.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objective -The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. +The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published. -A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}. +A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2). This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias. @@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve #### Retrieve available versions via the console -You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas. @@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk). - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state. -The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}. +The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1). -The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. +The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. 
### Asynchronous management and events @@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi Several libraries are available to use the OVHcloud APIs: -- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Go further diff --git a/pages/manage_and_operate/api/apiv2/guide.pt-pt.md b/pages/manage_and_operate/api/apiv2/guide.pt-pt.md index 829c9a0fa28..acc8c9f963d 100644 --- a/pages/manage_and_operate/api/apiv2/guide.pt-pt.md +++ b/pages/manage_and_operate/api/apiv2/guide.pt-pt.md @@ -6,11 +6,11 @@ updated: 2023-04-17 ## Objective -The APIs available at [https://eu.api.ovh.com/](/links/api){.external} allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. +The APIs available at [https://eu.api.ovh.com/](/links/api) allow you to buy, manage, update and configure OVHcloud products without using a graphical interface like the OVHcloud Control Panel. Historically, OVHcloud APIs have been available under the **/1.0** branch corresponding to the first version of the API that we published. -A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2){.external}. +A new section of the OVHcloud APIs is available under the prefix **/v2** on [https://eu.api.ovh.com/v2](https://api.ovh.com/console-preview/?branch=v2). This new branch will bring together new API routes, reworked in a new format, and become the main API branch for new feature developments of OVHcloud products.
The **/1.0** branch will continue to exist in parallel to the **/v2** branch but will not contain the same functionality. As a customer, you can consume APIs from branch **/1.0** and **/v2** simultaneously in your programs, while retaining the same authentication and tools to call the API. To standardise the naming of our API branches, the **/1.0** branch is also available via the **/v1** alias. @@ -52,7 +52,7 @@ When a major new version is released, we will evaluate the impact of this new ve #### Retrieve available versions via the console -You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers){.external}. +You can see the list of versions available on the OVHcloud API console. To do this, open the [console](https://api.ovh.com/console-preview/?section=%2Fiam&branch=v2#servers). The different versions are displayed in the **SCHEMAS VERSION** section. You can then select a version to view the associated API schemas. @@ -65,9 +65,9 @@ There are two opposing approaches to seeing the current state of a resource thro - **Process-centred approach**: The API exposes the current state of resources (for example, a Public Cloud instance) and offers operations for modifying them (for example, changing the size of a disk). - **Resource-centred approach**: The API exposes both the current state of resources and the desired state. Changes are made directly by updating the desired state of the resources. In this case, the API takes the necessary actions itself to reach the targeted state. -The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1){.external}. +The first approach is the one used by the current API: [https://eu.api.ovh.com/v1](https://eu.api.ovh.com/v1). -The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io){.external}. This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. +The APIv2 uses the resource-centric approach, which makes it easier to use *as-code*, particularly with tools like [Terraform](https://www.terraform.io). This operation also abstracts all the complexity of the process of transforming a resource from one state to another since it is the responsibility of the API and not the customer. 
### Asynchronous management and events @@ -155,9 +155,9 @@ The absence of the `X-Pagination-Cursor-Next` header in an API response containi Several libraries are available to use the OVHcloud APIs: -- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh){.external} -- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh){.external} -- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh){.external} +- Go: [https://github.com/ovh/go-ovh](https://github.com/ovh/go-ovh) +- Python: [https://github.com/ovh/python-ovh](https://github.com/ovh/python-ovh) +- PHP: [https://github.com/ovh/php-ovh](https://github.com/ovh/php-ovh) ## Go further diff --git a/pages/manage_and_operate/api/console-preview/guide.de-de.md b/pages/manage_and_operate/api/console-preview/guide.de-de.md index f5efaf817e6..285f64db3bc 100644 --- a/pages/manage_and_operate/api/console-preview/guide.de-de.md +++ b/pages/manage_and_operate/api/console-preview/guide.de-de.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-asia.md b/pages/manage_and_operate/api/console-preview/guide.en-asia.md index c88d58b3f58..ce0f74fdab8 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-asia.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-asia.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-au.md b/pages/manage_and_operate/api/console-preview/guide.en-au.md index c88d58b3f58..ce0f74fdab8 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-au.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-au.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. 
+The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-ca.md b/pages/manage_and_operate/api/console-preview/guide.en-ca.md index c88d58b3f58..ce0f74fdab8 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-ca.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-ca.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-gb.md b/pages/manage_and_operate/api/console-preview/guide.en-gb.md index 87c4a854c39..09c89332005 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-gb.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-gb.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-ie.md b/pages/manage_and_operate/api/console-preview/guide.en-ie.md index 87c4a854c39..09c89332005 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-ie.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-ie.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. 
-- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-sg.md b/pages/manage_and_operate/api/console-preview/guide.en-sg.md index c88d58b3f58..ce0f74fdab8 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-sg.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-sg.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.en-us.md b/pages/manage_and_operate/api/console-preview/guide.en-us.md index c88d58b3f58..ce0f74fdab8 100644 --- a/pages/manage_and_operate/api/console-preview/guide.en-us.md +++ b/pages/manage_and_operate/api/console-preview/guide.en-us.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.es-es.md b/pages/manage_and_operate/api/console-preview/guide.es-es.md index f5efaf817e6..285f64db3bc 100644 --- a/pages/manage_and_operate/api/console-preview/guide.es-es.md +++ b/pages/manage_and_operate/api/console-preview/guide.es-es.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. 
## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.es-us.md b/pages/manage_and_operate/api/console-preview/guide.es-us.md index 2347a5e1e50..e255f253982 100644 --- a/pages/manage_and_operate/api/console-preview/guide.es-us.md +++ b/pages/manage_and_operate/api/console-preview/guide.es-us.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.fr-ca.md b/pages/manage_and_operate/api/console-preview/guide.fr-ca.md index 07ebf8abbc8..95b05494a20 100644 --- a/pages/manage_and_operate/api/console-preview/guide.fr-ca.md +++ b/pages/manage_and_operate/api/console-preview/guide.fr-ca.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objectif -Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. **Découvrez comment explorer les API OVHcloud à travers notre nouvelle console.** ## Prérequis - Disposer d'un compte OVHcloud actif et connaître ses identifiants. -- Être sur la page web des [API OVHcloud](/links/api){.external}. +- Être sur la page web des [API OVHcloud](/links/api). ## En pratique diff --git a/pages/manage_and_operate/api/console-preview/guide.fr-fr.md b/pages/manage_and_operate/api/console-preview/guide.fr-fr.md index f18afa27916..826222d6fb6 100644 --- a/pages/manage_and_operate/api/console-preview/guide.fr-fr.md +++ b/pages/manage_and_operate/api/console-preview/guide.fr-fr.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objectif -Les API disponibles sur [https://eu.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://eu.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. **Découvrez comment explorer les API OVHcloud à travers notre nouvelle console.** ## Prérequis - Disposer d'un compte OVHcloud actif et connaître ses identifiants. -- Être sur la page web des [API OVHcloud](/links/api){.external}. +- Être sur la page web des [API OVHcloud](/links/api). 
## En pratique diff --git a/pages/manage_and_operate/api/console-preview/guide.it-it.md b/pages/manage_and_operate/api/console-preview/guide.it-it.md index f5efaf817e6..285f64db3bc 100644 --- a/pages/manage_and_operate/api/console-preview/guide.it-it.md +++ b/pages/manage_and_operate/api/console-preview/guide.it-it.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.pl-pl.md b/pages/manage_and_operate/api/console-preview/guide.pl-pl.md index f5efaf817e6..285f64db3bc 100644 --- a/pages/manage_and_operate/api/console-preview/guide.pl-pl.md +++ b/pages/manage_and_operate/api/console-preview/guide.pl-pl.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions diff --git a/pages/manage_and_operate/api/console-preview/guide.pt-pt.md b/pages/manage_and_operate/api/console-preview/guide.pt-pt.md index f5efaf817e6..285f64db3bc 100644 --- a/pages/manage_and_operate/api/console-preview/guide.pt-pt.md +++ b/pages/manage_and_operate/api/console-preview/guide.pt-pt.md @@ -6,14 +6,14 @@ updated: 2023-03-27 ## Objective -The APIs available on [https://eu.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. +The APIs available on [https://eu.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the OVHcloud Control Panel. **Discover how to explore the OVHcloud APIs on our brand new console** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. 
## Instructions diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md index 2b7b377ca9a..e6545b4d4f1 100644 --- a/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md +++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-ca.md @@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud. ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). -* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}. +* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account). * Having at least the Business or Enterprise Support level. ## Instructions diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md index 2b7b377ca9a..e6545b4d4f1 100644 --- a/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md +++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-gb.md @@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud. ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). -* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}. +* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account). * Having at least the Business or Enterprise Support level. ## Instructions diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md b/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md index 2b7b377ca9a..e6545b4d4f1 100644 --- a/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md +++ b/pages/manage_and_operate/api/enterprise-payment/guide.en-ie.md @@ -10,10 +10,10 @@ We will describe part of your payment and billing cycle at OVHcloud. ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). -* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account){.external}. +* Having [created subaccounts for the OVHcloud API if necessary](/pages/manage_and_operate/api/account). * Having at least the Business or Enterprise Support level. 
## Instructions diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md b/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md index 9fefedc124f..10fb32a671f 100644 --- a/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md +++ b/pages/manage_and_operate/api/enterprise-payment/guide.fr-ca.md @@ -10,9 +10,9 @@ Nous allons décrire une partie du cycle de gestion de votre paiement et de votr ## Prérequis -* Être connecté aux [API OVHcloud](/links/api){.external}. -* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}. -* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account){.external}. +* Être connecté aux [API OVHcloud](/links/api). +* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps). +* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account). * Avoir a minima le niveau de support de type Business ou Enterprise. ## En pratique diff --git a/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md b/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md index 9fefedc124f..10fb32a671f 100644 --- a/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md +++ b/pages/manage_and_operate/api/enterprise-payment/guide.fr-fr.md @@ -10,9 +10,9 @@ Nous allons décrire une partie du cycle de gestion de votre paiement et de votr ## Prérequis -* Être connecté aux [API OVHcloud](/links/api){.external}. -* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}. -* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account){.external}. +* Être connecté aux [API OVHcloud](/links/api). +* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps). +* Avoir [créé des sous-comptes pour l'API OVHcloud si nécéssaire](/pages/manage_and_operate/api/account). * Avoir a minima le niveau de support de type Business ou Enterprise. ## En pratique diff --git a/pages/manage_and_operate/api/first-steps/guide.de-de.md b/pages/manage_and_operate/api/first-steps/guide.de-de.md index ed5a4dc4db5..c434155e59c 100644 --- a/pages/manage_and_operate/api/first-steps/guide.de-de.md +++ b/pages/manage_and_operate/api/first-steps/guide.de-de.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Ziel -Die unter [https://api.ovh.com/](/links/api){.external} verfügbare API erlaubt es Ihnen, OVHcloud Produkte zu bestellen, zu verwalten, zu aktualisieren und zu konfigurieren, ohne ein grafisches Interface wie das Kundencenter zu verwenden. +Die unter [https://api.ovh.com/](/links/api) verfügbare API erlaubt es Ihnen, OVHcloud Produkte zu bestellen, zu verwalten, zu aktualisieren und zu konfigurieren, ohne ein grafisches Interface wie das Kundencenter zu verwenden. **Hier erfahren Sie, wie Sie die OVHcloud API verwenden und mit Ihren Anwendungen verbinden.** ## Voraussetzungen - Sie verfügen über einen aktiven OVHcloud Kunden-Account und dessen Zugangsdaten. -- Sie sind auf der Webseite der [OVHcloud API](/links/api){.external}. +- Sie sind auf der Webseite der [OVHcloud API](/links/api). ## In der praktischen Anwendung @@ -131,7 +131,7 @@ Die Tabs `PHP` und `Python` enthalten die Elemente, die entsprechend der Anwendu Jede Anwendung, die mit der OVHcloud API kommunizieren möchte, muss zuerst freigegeben werden. 
-Klicken Sie hierzu auf folgenden Link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +Klicken Sie hierzu auf folgenden Link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Geben Sie Ihre Kundenkennung, Ihr Passwort und den Namen Ihrer Anwendung ein. Der Name kann nützlich sein, um anderen Personen Zugriff zu gewähren. diff --git a/pages/manage_and_operate/api/first-steps/guide.en-asia.md b/pages/manage_and_operate/api/first-steps/guide.en-asia.md index e72f3a24aa3..e27c18eb042 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-asia.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-asia.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. diff --git a/pages/manage_and_operate/api/first-steps/guide.en-au.md b/pages/manage_and_operate/api/first-steps/guide.en-au.md index e72f3a24aa3..e27c18eb042 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-au.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-au.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. 
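Alongside the createToken page, a consumer key can also be requested programmatically once the application keys exist. The sketch below uses `python-ovh` and restricts the token to read-only access on the whole API; the keys are placeholders and the returned validation URL still has to be opened in a browser to confirm the request.

```python
import ovh

# Application keys obtained when declaring the application (placeholders).
client = ovh.Client(
    endpoint="ovh-eu",
    application_key="YOUR_APP_KEY",
    application_secret="YOUR_APP_SECRET",
)

# Request a consumer key limited to read-only access on all routes.
ck_request = client.new_consumer_key_request()
ck_request.add_recursive_rules(ovh.API_READ_ONLY, "/")
credentials = ck_request.request()

print("Open this URL to validate the token:", credentials["validationUrl"])
print("Consumer key:", credentials["consumerKey"])
```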
The name will be useful later if you want to allow others to use it. diff --git a/pages/manage_and_operate/api/first-steps/guide.en-ca.md b/pages/manage_and_operate/api/first-steps/guide.en-ca.md index e72f3a24aa3..e27c18eb042 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-ca.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-ca.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. diff --git a/pages/manage_and_operate/api/first-steps/guide.en-gb.md b/pages/manage_and_operate/api/first-steps/guide.en-gb.md index a587b2ada91..2626fc923fc 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-gb.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-gb.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. 
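The createToken flow described in these first-steps guides ends with three credentials (application key, application secret, consumer key) that your script then consumes. As a minimal sketch only, assuming the official `ovh` Python wrapper (`pip install ovh`), an EU account and placeholder keys:

```python
# Minimal sketch using the official python-ovh wrapper (pip install ovh).
# The three keys below are placeholders: paste the values returned by the
# createToken page of your API region.
import ovh

client = ovh.Client(
    endpoint="ovh-eu",  # assumption: EU region; use "ovh-ca" or "ovh-us" otherwise
    application_key="<application_key>",
    application_secret="<application_secret>",
    consumer_key="<consumer_key>",
)

# Simple authenticated call: read the account attached to the credentials.
print(client.get("/me")["firstname"])
```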
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-ie.md b/pages/manage_and_operate/api/first-steps/guide.en-ie.md index a587b2ada91..2626fc923fc 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-ie.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-ie.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. diff --git a/pages/manage_and_operate/api/first-steps/guide.en-sg.md b/pages/manage_and_operate/api/first-steps/guide.en-sg.md index e72f3a24aa3..e27c18eb042 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-sg.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-sg.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. 
diff --git a/pages/manage_and_operate/api/first-steps/guide.en-us.md b/pages/manage_and_operate/api/first-steps/guide.en-us.md index e72f3a24aa3..e27c18eb042 100644 --- a/pages/manage_and_operate/api/first-steps/guide.en-us.md +++ b/pages/manage_and_operate/api/first-steps/guide.en-us.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objective -The APIs available on [https://ca.api.ovh.com/](/links/api){.external} allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. +The APIs available on [https://ca.api.ovh.com/](/links/api) allow you to purchase, manage, update and configure OVHcloud products without using a graphical interface such as the Control Panel. **Learn how to use OVHcloud APIs and how to pair them with your applications.** ## Requirements - You have an active OVHcloud account and know its credentials. -- You are on the [OVHcloud API](/links/api){.external} web page. +- You are on the [OVHcloud API](/links/api) web page. ## Instructions @@ -128,7 +128,7 @@ The `PHP` and `Python` tabs contain the elements to be added to your script acco Any application that wants to communicate with the OVHcloud API must be declared in advance. -To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +To do this, click the following link: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Fill in your OVHcloud customer ID, password, and application name. The name will be useful later if you want to allow others to use it. diff --git a/pages/manage_and_operate/api/first-steps/guide.es-es.md b/pages/manage_and_operate/api/first-steps/guide.es-es.md index 531803ea1fb..a7d4ecf8375 100644 --- a/pages/manage_and_operate/api/first-steps/guide.es-es.md +++ b/pages/manage_and_operate/api/first-steps/guide.es-es.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Objetivo -Las API disponibles en [https://api.ovh.com/](/links/api){.external} le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente. +Las API disponibles en [https://api.ovh.com/](/links/api) le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente. **Cómo utilizar las API de OVHcloud y cómo asociarlas a sus aplicaciones** ## Requisitos - Disponer de una cuenta de OVHcloud activa y conocer sus claves de acceso. -- Estar en la página web de las [API de OVHcloud](/links/api){.external}. +- Estar en la página web de las [API de OVHcloud](/links/api). ## Procedimiento @@ -132,7 +132,7 @@ Las pestañas `PHP` y `Python` contienen los elementos que se añadirán al scri Todas las aplicaciones que quieran comunicarse con la API de OVHcloud deben notificarse con antelación. -Para ello, haga clic en el siguiente enlace: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +Para ello, haga clic en el siguiente enlace: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Introduzca su identificador de cliente, su contraseña y el nombre de su aplicación. El nombre será útil más adelante si desea permitir que otras personas lo usen. 
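The `PHP` and `Python` tabs mentioned in these guides wrap the same signed-request mechanics for you. As a rough sketch of that signing, assuming the publicly documented `$1$` + SHA-1 scheme, the EU endpoint and placeholder keys (a wrapper normally handles all of this):

```python
# Sketch of the request signing performed by the official wrappers.
# Keys are placeholders; the endpoint is assumed to be the EU region.
import hashlib
import time

import requests

ENDPOINT = "https://eu.api.ovh.com/1.0"
AK = "<application_key>"
AS = "<application_secret>"
CK = "<consumer_key>"

def signed_get(path: str) -> dict:
    url = ENDPOINT + path
    # Use the server clock to avoid signature rejections caused by local drift.
    now = str(requests.get(ENDPOINT + "/auth/time", timeout=10).json())
    raw = "+".join([AS, CK, "GET", url, "", now])  # empty body for a GET
    signature = "$1$" + hashlib.sha1(raw.encode("utf-8")).hexdigest()
    headers = {
        "X-Ovh-Application": AK,
        "X-Ovh-Consumer": CK,
        "X-Ovh-Timestamp": now,
        "X-Ovh-Signature": signature,
    }
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

print(signed_get("/me"))
```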
diff --git a/pages/manage_and_operate/api/first-steps/guide.es-us.md b/pages/manage_and_operate/api/first-steps/guide.es-us.md index 9f40d0f9ee8..9b51a0a9999 100644 --- a/pages/manage_and_operate/api/first-steps/guide.es-us.md +++ b/pages/manage_and_operate/api/first-steps/guide.es-us.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Objetivo -Las API disponibles en [https://ca.api.ovh.com/](/links/api){.external} le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente. +Las API disponibles en [https://ca.api.ovh.com/](/links/api) le permiten adquirir, gestionar, actualizar y configurar productos de OVHcloud sin utilizar una interfaz gráfica como el área de cliente. **Cómo utilizar las API de OVHcloud y cómo asociarlas a sus aplicaciones** ## Requisitos - Disponer de una cuenta de OVHcloud activa y conocer sus claves de acceso. -- Estar en la página web de las [API de OVHcloud](/links/api){.external}. +- Estar en la página web de las [API de OVHcloud](/links/api). ## Procedimiento @@ -132,7 +132,7 @@ Las pestañas `PHP` y `Python` contienen los elementos que se añadirán al scri Todas las aplicaciones que quieran comunicarse con la API de OVHcloud deben notificarse con antelación. -Para ello, haga clic en el siguiente enlace: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +Para ello, haga clic en el siguiente enlace: [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Introduzca su identificador de cliente, su contraseña y el nombre de su aplicación. El nombre será útil más adelante si desea permitir que otras personas lo usen. diff --git a/pages/manage_and_operate/api/first-steps/guide.fr-ca.md b/pages/manage_and_operate/api/first-steps/guide.fr-ca.md index 78b3d7a8a96..c2ad96c1e3d 100644 --- a/pages/manage_and_operate/api/first-steps/guide.fr-ca.md +++ b/pages/manage_and_operate/api/first-steps/guide.fr-ca.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objectif -Les API disponibles sur [https://ca.api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://ca.api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. **Découvrez comment utiliser les API OVHcloud mais aussi comment les coupler avec vos applications** ## Prérequis - Disposer d'un compte OVHcloud actif et connaître ses identifiants. -- Être sur la page web des [API OVHcloud](/links/api){.external}. +- Être sur la page web des [API OVHcloud](/links/api). ## En pratique @@ -128,7 +128,7 @@ Les onglets `PHP` et `Python` contiennent les éléments à ajouter dans votre s Toute application souhaitant communiquer avec l'API OVHcloud doit être déclarée à l'avance. -Pour ce faire, cliquez sur le lien suivant : [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/){.external}. +Pour ce faire, cliquez sur le lien suivant : [https://ca.api.ovh.com/createToken/](https://ca.api.ovh.com/createToken/). Renseignez votre identifiant client, votre mot de passe et le nom de votre application. Le nom sera utile plus tard si vous voulez autoriser d'autres personnes à l'utiliser. 
diff --git a/pages/manage_and_operate/api/first-steps/guide.fr-fr.md b/pages/manage_and_operate/api/first-steps/guide.fr-fr.md index d999b3105b2..0d33034f907 100644 --- a/pages/manage_and_operate/api/first-steps/guide.fr-fr.md +++ b/pages/manage_and_operate/api/first-steps/guide.fr-fr.md @@ -6,14 +6,14 @@ updated: 2025-05-13 ## Objectif -Les API disponibles sur [https://api.ovh.com/](/links/api){.external} vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. +Les API disponibles sur [https://api.ovh.com/](/links/api) vous permettent d'acheter, gérer, mettre à jour et configurer des produits OVHcloud sans utiliser une interface graphique comme l'espace client. **Découvrez comment utiliser les API OVHcloud mais aussi comment les coupler avec vos applications** ## Prérequis - Disposer d'un compte OVHcloud actif et connaître ses identifiants. -- Être sur la page web des [API OVHcloud](/links/api){.external}. +- Être sur la page web des [API OVHcloud](/links/api). ## En pratique @@ -128,7 +128,7 @@ Les onglets `PHP` et `Python` contiennent les éléments à ajouter dans votre s Toute application souhaitant communiquer avec l'API OVHcloud doit être déclarée à l'avance. -Pour ce faire, cliquez sur le lien suivant : [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +Pour ce faire, cliquez sur le lien suivant : [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Renseignez votre identifiant client, votre mot de passe et le nom de votre application. Le nom sera utile plus tard si vous voulez autoriser d'autres personnes à l'utiliser. diff --git a/pages/manage_and_operate/api/first-steps/guide.it-it.md b/pages/manage_and_operate/api/first-steps/guide.it-it.md index a0fe1365197..24a2412944e 100644 --- a/pages/manage_and_operate/api/first-steps/guide.it-it.md +++ b/pages/manage_and_operate/api/first-steps/guide.it-it.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Obiettivo -Le API disponibili su [https://api.ovh.com/](/links/api){.external} ti permettono di acquistare, gestire, aggiornare e configurare prodotti OVHcloud senza utilizzare un'interfaccia grafica come lo Spazio Cliente. +Le API disponibili su [https://api.ovh.com/](/links/api) ti permettono di acquistare, gestire, aggiornare e configurare prodotti OVHcloud senza utilizzare un'interfaccia grafica come lo Spazio Cliente. **Scopri come utilizzare le API OVHcloud e come associarle alle tue applicazioni** ## Prerequisiti - Disporre di un account OVHcloud attivo e conoscere le proprie credenziali -- Essere sulla pagina Web delle [API OVHcloud](/links/api){.external}. +- Essere sulla pagina Web delle [API OVHcloud](/links/api). ## Procedura @@ -132,7 +132,7 @@ Le schede `PHP` e `Python` contengono gli elementi da aggiungere al tuo script i Qualsiasi applicazione che desideri comunicare con l'API OVHcloud deve essere dichiarata in anticipo. -Clicca su questo link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +Clicca su questo link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Inserisci il tuo identificativo cliente, la password e il nome della tua applicazione. Il nome sarà utile più tardi se volete autorizzare altre persone a usarlo. 
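Besides the createToken web form covered by these guides, a consumer key can also be requested from a script. A sketch under the assumption that the `new_consumer_key_request` helper of the python-ovh wrapper is available, with placeholder keys and a hypothetical read-only rule on `/me`:

```python
# Sketch: requesting a consumer key from a script instead of the createToken
# web form, using the helper shipped with the python-ovh wrapper.
import ovh

client = ovh.Client(
    endpoint="ovh-eu",  # assumption: EU region
    application_key="<application_key>",
    application_secret="<application_secret>",
)

ck_request = client.new_consumer_key_request()
ck_request.add_rules(ovh.API_READ_ONLY, "/me")  # read-only access to /me only

validation = ck_request.request()
print("Validate the token at:", validation["validationUrl"])
input("Press Enter once the consumer key has been validated...")

print("Consumer key:", validation["consumerKey"])
print("Welcome,", client.get("/me")["firstname"])
```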
diff --git a/pages/manage_and_operate/api/first-steps/guide.pl-pl.md b/pages/manage_and_operate/api/first-steps/guide.pl-pl.md index 32edc512298..a594673af4f 100644 --- a/pages/manage_and_operate/api/first-steps/guide.pl-pl.md +++ b/pages/manage_and_operate/api/first-steps/guide.pl-pl.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Wprowadzenie -API dostępne na stronie [https://api.ovh.com/](/links/api){.external} pozwalają na zakup, zarządzanie i konfigurowanie produktów OVHcloud bez konieczności korzystania z interfejsu graficznego, takiego jak Panel klienta. +API dostępne na stronie [https://api.ovh.com/](/links/api) pozwalają na zakup, zarządzanie i konfigurowanie produktów OVHcloud bez konieczności korzystania z interfejsu graficznego, takiego jak Panel klienta. **Dowiedz się, jak korzystać z API OVHcloud oraz jak je łączyć z Twoimi aplikacjami** ## Wymagania początkowe - Posiadanie aktywnego konta OVHcloud i znanie jego identyfikatorów -- Bycie na stronie WWW [API OVHcloud](/links/api){.external}. +- Bycie na stronie WWW [API OVHcloud](/links/api). ## W praktyce @@ -132,7 +132,7 @@ Zakładki `PHP` i `Python` zawierają elementy, które należy dodać do skryptu Każda aplikacja, która chce komunikować się z API OVHcloud, musi zostać zgłoszona z wyprzedzeniem. -W tym celu kliknij link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +W tym celu kliknij link: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Wpisz identyfikator klienta, hasło i nazwę aplikacji. Nazwa będzie pomocna później, jeśli chcesz zezwolić innym na jej używanie. diff --git a/pages/manage_and_operate/api/first-steps/guide.pt-pt.md b/pages/manage_and_operate/api/first-steps/guide.pt-pt.md index 91bfed784e6..16102d438a6 100644 --- a/pages/manage_and_operate/api/first-steps/guide.pt-pt.md +++ b/pages/manage_and_operate/api/first-steps/guide.pt-pt.md @@ -10,14 +10,14 @@ updated: 2025-05-13 ## Objetivo -As API disponíveis em [https://api.ovh.com/](/links/api){.external} permitem-lhe adquirir, gerir, atualizar e configurar produtos OVHcloud sem utilizar uma interface gráfica como a Área de Cliente. +As API disponíveis em [https://api.ovh.com/](/links/api) permitem-lhe adquirir, gerir, atualizar e configurar produtos OVHcloud sem utilizar uma interface gráfica como a Área de Cliente. **Saiba como utilizar as API da OVHcloud e como associá-las às suas aplicações** ## Requisitos - Ter uma conta OVHcloud ativa e conhecer os seus identificadores. -- Estar na página web das [API OVHcloud](/links/api){.external}. +- Estar na página web das [API OVHcloud](/links/api). ## Instruções @@ -132,7 +132,7 @@ Os separadores `PHP` e `Python` contêm os elementos que devem ser adicionados n Qualquer aplicação que pretenda comunicar com a API da OVHcloud deve ser declarada previamente. -Para isso, clique na seguinte ligação: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/){.external}. +Para isso, clique na seguinte ligação: [https://eu.api.ovh.com/createToken/](https://eu.api.ovh.com/createToken/). Indique o seu ID de cliente, a sua palavra-passe e o nome da sua aplicação. O nome será útil mais tarde se quiser autorizar outras pessoas a utilizá-lo. 
diff --git a/pages/manage_and_operate/api/services/guide.en-asia.md b/pages/manage_and_operate/api/services/guide.en-asia.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-asia.md +++ b/pages/manage_and_operate/api/services/guide.en-asia.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-au.md b/pages/manage_and_operate/api/services/guide.en-au.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-au.md +++ b/pages/manage_and_operate/api/services/guide.en-au.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-ca.md b/pages/manage_and_operate/api/services/guide.en-ca.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-ca.md +++ b/pages/manage_and_operate/api/services/guide.en-ca.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-gb.md b/pages/manage_and_operate/api/services/guide.en-gb.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-gb.md +++ b/pages/manage_and_operate/api/services/guide.en-gb.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). 
## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-ie.md b/pages/manage_and_operate/api/services/guide.en-ie.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-ie.md +++ b/pages/manage_and_operate/api/services/guide.en-ie.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-sg.md b/pages/manage_and_operate/api/services/guide.en-sg.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-sg.md +++ b/pages/manage_and_operate/api/services/guide.en-sg.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.en-us.md b/pages/manage_and_operate/api/services/guide.en-us.md index 0049144448f..3ad72ba0918 100644 --- a/pages/manage_and_operate/api/services/guide.en-us.md +++ b/pages/manage_and_operate/api/services/guide.en-us.md @@ -17,8 +17,8 @@ The **/service** API route consists of actions common to all types of services a ## Requirements -* Being connected on [OVHcloud API](/links/api){.external}. -* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps){.external}. +* Being connected on [OVHcloud API](/links/api). +* Having [created your credentials for OVHcloud API](/pages/manage_and_operate/api/first-steps). * Having a customer account wih Reseller Tag (contact your sales representative for eligibility if applicable). ## Instructions diff --git a/pages/manage_and_operate/api/services/guide.fr-ca.md b/pages/manage_and_operate/api/services/guide.fr-ca.md index 9f7fae9a4bc..dfdbab3741e 100644 --- a/pages/manage_and_operate/api/services/guide.fr-ca.md +++ b/pages/manage_and_operate/api/services/guide.fr-ca.md @@ -18,8 +18,8 @@ La route d'API **/service** regroupe les actions communes à tous types de servi ## Prérequis -* Être connecté aux [API OVHcloud](/links/api){.external}. -* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}. +* Être connecté aux [API OVHcloud](/links/api). +* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps). * Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant). 
## En pratique diff --git a/pages/manage_and_operate/api/services/guide.fr-fr.md b/pages/manage_and_operate/api/services/guide.fr-fr.md index 9f7fae9a4bc..dfdbab3741e 100644 --- a/pages/manage_and_operate/api/services/guide.fr-fr.md +++ b/pages/manage_and_operate/api/services/guide.fr-fr.md @@ -18,8 +18,8 @@ La route d'API **/service** regroupe les actions communes à tous types de servi ## Prérequis -* Être connecté aux [API OVHcloud](/links/api){.external}. -* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps){.external}. +* Être connecté aux [API OVHcloud](/links/api). +* Avoir [créé ses identifiants pour l'API OVHcloud](/pages/manage_and_operate/api/first-steps). * Avoir un compte client avec un tag Reseller (contactez votre commercial pour connaître votre éligibilité le cas échéant). ## En pratique diff --git a/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md b/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md index b47dfd89927..d7e89770cae 100644 --- a/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md +++ b/pages/manage_and_operate/kms/kms-troubleshooting/guide.en-gb.md @@ -105,7 +105,7 @@ Elements that can be pushed to Logs Data Platform: |iam_operation|IAM action evalutated| |iam_identities|IAM identity used for rights evaluation| |kmip_operation|KMIP operation used| -|kmip_reason|[Standard KMIP error code](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D){.external}| +|kmip_reason|[Standard KMIP error code](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D)| ## Go further diff --git a/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md b/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md index 23a0b7889a9..c09149b69bc 100644 --- a/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md +++ b/pages/manage_and_operate/kms/kms-troubleshooting/guide.fr-fr.md @@ -106,7 +106,7 @@ Les éléments pouvant être transmis à Logs Data Platform étant : |iam_operation|Action IAM évaluée| |iam_identities|Identitée IAM utilisé pour l'évaluation des droits| |kmip_operation|Opération KMIP utilisée| -|kmip_reason|[code d'erreur KMIP](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D){.external}| +|kmip_reason|[code d'erreur KMIP](https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.pdf#%5B%7B%22num%22%3A484%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D)| ## Aller plus loin diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.de-de.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. 
Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). 
- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-asia.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. 
It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. 
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-au.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. 
## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. 
+ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ca.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. 
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-gb.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
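For reference, a frequency rule of the shape described above could look like the following sketch. It is illustrative rather than the guide's exact `frequency.yml`: the index pattern and e-mail address are placeholders, and the option names are the standard ElastAlert 2 frequency rule settings.

```yaml
# Illustrative frequency rule: fire when the field "user" equals "Oles"
# at least num_events times within the timeframe, then alert by e-mail
# and with the debug alerter (which only logs the alert at INFO level).
name: Example frequency rule
type: frequency
index: <alias-or-index-pattern>   # placeholder: the alias or index holding your logs
num_events: 3                     # matching events needed inside the timeframe to fire
timeframe:
  hours: 4
filter:
- term:
    user: "Oles"                  # use user.keyword instead if the field is analysed
alert:
- "email"
- "debug"
email:
- "you@example.com"               # placeholder recipient
```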
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-ie.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
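Since Email is one of the most commonly used alerters, here is a hedged sketch of the SMTP-related options that can be added to a rule file. Every value is a placeholder for your own mail relay; the option names are the generic ElastAlert 2 email settings and are not specific to Logs Data Platform.

```yaml
# Illustrative e-mail alerter settings for a rule file (all values are placeholders).
alert:
- "email"
email:
- "oncall@example.com"
from_addr: "elastalert@example.com"
smtp_host: "smtp.example.com"
smtp_port: 465
smtp_ssl: true
smtp_auth_file: "/opt/elastalert/smtp_auth.yaml"   # YAML file containing user/password keys
```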
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-sg.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
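If you installed the Python packages, the command-line workflow typically looks like the sketch below. File names and paths are examples only; adapt them to where you placed your configuration and rules.

```bash
# Typical workflow once config.yaml and a rule file exist (paths are examples).
elastalert-create-index --config config.yaml                     # create the writeback meta-indices
elastalert-test-rule --config config.yaml rules/frequency.yml    # dry-run a single rule
python -m elastalert.elastalert --verbose --config config.yaml   # run the daemon with verbose logging
```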
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.en-us.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
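If you prefer the Docker image mentioned in the installation step, a minimal invocation could look like this. The image name and mount paths are taken from the ElastAlert 2 project's documentation and should be double-checked there, as defaults may change between releases.

```bash
# Run ElastAlert 2 from its container image, mounting the local configuration
# and rules folder at the paths the image expects (verify against the project docs).
docker run -d --name elastalert \
  -v "$(pwd)/config.yaml:/opt/elastalert/config.yaml" \
  -v "$(pwd)/rules:/opt/elastalert/rules" \
  jertel/elastalert2
```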
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-es.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.es-us.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-ca.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.fr-fr.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md index 27f066c7a61..b0e95d455ef 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.it-it.md @@ -6,13 +6,13 @@ updated: 2024-08-07 ## Objective -[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. +[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform. ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - A machine on which you will deploy ElastAlert. - Some data on an alias or an index. @@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps: ### Installation -Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. 
You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. +Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation. You can either install the latest released version of ElastAlert 2 using pip: @@ -131,7 +131,7 @@ alert_time_limit: days: 2 ``` -You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}. +You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring). - **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start. - **run_every** is how often ElastAlert will query OpenSearch. @@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io ### Rules configuration -In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. +In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**. ```yaml name: Example frequency rule @@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule ``` -ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}. +ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts). 

## Go further

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pl-pl.md
@@ -6,13 +6,13 @@ updated: 2024-08-07

## Objective

-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:

### Installation

-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.

You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
  days: 2
```

-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).

- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io

### Rules configuration

-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.

```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```

-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).
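
As an illustration of the pip-based installation mentioned above, and only as a sketch (it assumes the `elastalert2` package published on PyPI and its standard command-line entry points), the steps could look like this:

```shell-session
$ ldp@ubuntu:~$ pip install elastalert2
$ ldp@ubuntu:~$ elastalert-create-index
$ ldp@ubuntu:~$ elastalert --verbose --config config.yaml
```

The second command prepares the metadata indices used by ElastAlert, and the last one starts ElastAlert with your `config.yaml`.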

## Go further

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
index 27f066c7a61..b0e95d455ef 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_elastalert/guide.pt-pt.md
@@ -6,13 +6,13 @@ updated: 2024-08-07

## Objective

-[ElastAlert 2](https://github.com/jertel/elastalert){.external} is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.
+[ElastAlert 2](https://github.com/jertel/elastalert) is an alerting framework originally designed by Yelp. It is able to detect anomalies, spikes, or other patterns of interest. It is production-ready and is a well known standard of alerting in the Elasticsearch/OpenSearch ecosystem. Their motto is: "If you can see it in your dashboards, ElastAlert 2 can alert on it." In this document you will learn how to deploy this component on Logs Data Platform thanks to its compatibility with OpenSearch through [aliases](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) and [indexes](/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input). Logs Data Platform also allows you to host ElastAlert meta-indices on Logs Data Platform.

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- A machine on which you will deploy ElastAlert.
- Some data on an alias or an index.
@@ -42,7 +42,7 @@ ElastAlert configuration consists of three steps:

### Installation

-Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert){.external}. You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.
+Installing ElastAlert can be done in different ways as described in their [documentation](https://elastalert2.readthedocs.io/en/latest/elastalert.html#running-elastalert). You can either use the docker image or install the python 3 packages. You must check that your Python version is the one compatible with ElastAlert. Check the documentation to verify which version of Python is compatible. Be sure also to meet all the [requirements](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements) before attempting the installation.

You can either install the latest released version of ElastAlert 2 using pip:
@@ -131,7 +131,7 @@ alert_time_limit:
  days: 2
```

-You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring){.external}.
+You can find all the available options [here](https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#downloading-and-configuring).

- **rules_folder** is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules in this folder, ElastAlert will not start.
- **run_every** is how often ElastAlert will query OpenSearch.
@@ -147,7 +147,7 @@ You can find all the available options [here](https://elastalert2.readthedocs.io

### Rules configuration

-In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency){.external} rule which will send a email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.
+In this example, we will create a [frequency.yml](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#frequency) rule which will send an email if the field **user** with the value **Oles** appears more than **3** times in less than **4 hours** and use the debug logger **debug**.

```yaml
name: Example frequency rule
@@ -231,11 +231,11 @@ INFO:elastalert:Example frequency rule
```

-ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts){.external}.
+ElastAlert has a lot of integrations for alerting including Email, JIRA, OpsGenie, SNS, HipChat, Slack, MS Teams, PagerDuty, Zabbix, custom commands and [many more](https://elastalert2.readthedocs.io/en/latest/ruletypes.html#alerts).

## Go further

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.de-de.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-asia.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-au.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ca.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-gb.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-ie.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-sg.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.en-us.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-es.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.es-us.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-ca.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.fr-fr.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.it-it.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pl-pl.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
index 684d13b115f..7eddef38346 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/alerting_stream/guide.pt-pt.md
@@ -45,7 +45,7 @@ For this tutorial, we will configure the 3 alerts that we can use for a website.

#### Apache Server Configuration

-We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host){.external} to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:
+We will use the [Filebeat Apache format](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-apache.html#_virtual_host) to send logs, this format allows the filebeat module to parse the relevant information. Here is a configuration file sample:

```ApacheConf
@@ -108,7 +108,7 @@ Fill the value of **/etc/ssl/certs/ldp.pem** with the "Data-gathering tools" cer

![SSL input](images/ssl_input.png){.thumbnail}

-Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host){.external} by running:
+Ensure to enable [Apache support on Filebeat](https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-apache.html#_virtual_host) by running:

```shell-session
$ ldp@ubuntu:~$ sudo filebeat modules enable apache
@@ -201,5 +201,5 @@ You will then receive an email with the messages included. You can then directly

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.de-de.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream

On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:

-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download

#### Using the API

-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api).

You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
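
Once an archive has been downloaded (from the OVHcloud Manager or through the API), a quick local check could look like the sketch below. It assumes a GZIP-compressed archive, one GELF document per line (see the sample further down) and the `jq` utility; the file name is only a placeholder.

```shell-session
$ ldp@ubuntu:~$ gunzip example_archive.gelf.gz
$ ldp@ubuntu:~$ head -n 1 example_archive.gelf | jq '.host, .short_message'
```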
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc

#### Using ldp-archive-mirror

To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror)

#### Content of the archive

-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.

```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-asia.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream

On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:

-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download

#### Using the API

-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api).

You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc

#### Using ldp-archive-mirror

To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror)

#### Content of the archive

-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.

```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-au.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream

On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:

-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html).
- The retention duration of your archives (from one year to ten years).
- The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case)
- The activation of the notification for each new archive available.
@@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download

#### Using the API

-If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}.
+If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud API available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api).

You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager.
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc

#### Using ldp-archive-mirror

To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror**
-The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external}
+The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror)

#### Content of the archive

-The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.
+The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system.

```json
{"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9}
@@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
index 0435e773152..148e27ce7b1 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ca.md
@@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream

On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options:

-- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}.
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-gb.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-ie.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. 
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-sg.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.en-us.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. 
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-es.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.es-us.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. 
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-ca.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.fr-fr.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. 
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.it-it.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pl-pl.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. 
+- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. @@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. 
```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md index 0435e773152..148e27ce7b1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage/guide.pt-pt.md @@ -20,7 +20,7 @@ As implied in the title, you will need a stream. If you don't know what a stream On this page you will find the long-term storage toggle. Once enabled, you will be able to choose different options: -- The compression algorithm. We currently support [GZIP](http://www.gzip.org/){.external}, [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html){.external}, [Zstandard](https://facebook.github.io/zstd/){.external} or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html){.external}. +- The compression algorithm. We currently support [GZIP](http://www.gzip.org/), [DEFLATE (AKA zip)](http://www.zlib.net/feldspar.html), [Zstandard](https://facebook.github.io/zstd/) or [LZMA (used by 7-Zip)](http://www.7-zip.org/7z.html). - The retention duration of your archives (from one year to ten years). - The content of your archives: GELF, one special field [X-OVH-TO-FREEZE](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention), or both (you will get two separate archives in this case) - The activation of the notification for each new archive available. @@ -54,7 +54,7 @@ On each archive you can use the `Download`{.action} action to directly download #### Using the API -If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com){.external}. You can try all these steps with the [OVHcloud API Console](/links/api){.external}. +If you want to download your logs using the API (to use them in a Big Data analysis platform for example), you can do all these steps by using the OVHcloud api available at [https://api.ovh.com](https://api.ovh.com). You can try all these steps with the [OVHcloud API Console](/links/api). You will need your OVHcloud service name associated with your account. Your service name is the login logs-xxxxx that is displayed in the left of the OVHcloud Manager. 
@@ -122,11 +122,11 @@ It will take some time (depending on the size of your archive file) for your arc #### Using ldp-archive-mirror To allow you to get a local copy of all your cold stored archives on Logs Data Platform, we have developed an open source tool that will do this passively: **ldp-archive-mirror** -The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror){.external} +The installation and configuration procedure is described on the related [github page](https://github.com/ovh/ldp-archive-mirror) #### Content of the archive -The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. +The data you retrieve in the archive is by default in [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). It is ordered by the field timestamp and retains all additional fields that you would have added (with your [Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) for example). Since this format is fully compatible with JSON, you can use it right away in any other system. ```json {"_facility":"gelf-rb","_id":11,"_monitoring":"cb1068c485e738655cfe10df5df3a9a185aa8e301b5c8d0747b3502e8fdcc157","_type":"direct","full_message":"monitoring message (11) at 2017-05-17 09:58:08 +0000","host":"shinken","level":1,"short_message":"monitoring msg (11)","timestamp":1.4950150886486998E9} @@ -146,5 +146,5 @@ Remember, that you can also use a special field X-OVH-TO-FREEZE on your logs to - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.de-de.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-asia.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-au.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ca.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-gb.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-ie.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-sg.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.en-us.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-es.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.es-us.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-ca.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.fr-fr.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.it-it.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pl-pl.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md index 472461ecae8..14c9a522e43 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage_encryption/guide.pt-pt.md @@ -284,5 +284,5 @@ The Logs Data Platform team will then take care of your request. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.de-de.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-asia.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
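If you are curious about the kind of REST call Bonfire issues under the hood, here is a hedged sketch using `requests` against the classic Graylog search API. The host, credentials, endpoint and parameters below are illustrative assumptions (check your cluster address and the Graylog API documentation); Bonfire itself remains the simpler option.

```python
# Hedged sketch of the kind of query Bonfire performs: a relative-time search
# against Graylog's REST API. Host, credentials and endpoint are placeholders.
import requests

GRAYLOG_API = "https://your-cluster.logs.ovh.com/api"  # placeholder cluster address
USERNAME = "your-graylog-username"
PASSWORD = "your-graylog-password"

response = requests.get(
    f"{GRAYLOG_API}/search/universal/relative",
    params={"query": "level:<=3", "range": 300, "limit": 50},  # last 5 minutes of errors
    auth=(USERNAME, PASSWORD),
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
for hit in response.json().get("messages", []):
    message = hit.get("message", {})
    print(message.get("timestamp"), message.get("message"))
```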
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-au.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ca.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-gb.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-ie.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-sg.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.en-us.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-es.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.es-us.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-ca.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.fr-fr.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.it-it.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pl-pl.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. 
@@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md index 381d88a01d0..fa3049cbe45 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_bonfire/guide.pt-pt.md @@ -5,7 +5,7 @@ updated: 2024-08-07 ## Objective -[Bonfire](https://github.com/blue-yonder/bonfire){.external} is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. +[Bonfire](https://github.com/blue-yonder/bonfire) is an open source command line interface to query Graylog searches via the REST API. It emulates the experience of using tail on a local file and adds other valuable options. This guide will help you to query your logs from the Bonfire command line tool. @@ -160,5 +160,5 @@ Enter password for @.logs.ovh.com:443/api: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.de-de.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. 
So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary.

You can test it right away on our demo stream by using this command in a terminal.

@@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa

### Formatting and Filtering

-**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output.
+**ldp-tail** is not just a plain tail (as its name suggests). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities is available on the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output.

#### The pattern option

@@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re
My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway.
```

-Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool.
+Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format, so you can use them afterwards in any GELF-compatible tool.

The pattern option allows you also to customize colors, background and text colors are customizable.

@@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p

#### The match option

-As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value.
+As the name implies, the match option lets you choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with certain values, or display only messages that have a certain field or whose field is higher or lower than a given value.
Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-asia.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. 
Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. ``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
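Instead of converting dates by hand on unixtimestamp.com, you can compute the `begin` and `end` values locally. A small sketch in Python (the chosen dates reproduce the two example timestamps used in the command above):

```python
# Compute UNIX timestamps for the begin/end query parameters of the tail WebSocket URL.
from datetime import datetime, timezone

begin = int(datetime(2024, 8, 5, 7, 0, tzinfo=timezone.utc).timestamp())  # 1722841200
end = int(datetime(2024, 8, 5, 9, 0, tzinfo=timezone.utc).timestamp())    # 1722848400

url = f"wss://gra1.logs.ovh.com/tail/?tk=demo&begin={begin}&end={end}"
print(url)  # pass this address to: ldp-tail --address "..."
```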
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-au.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
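To make the GELF naming convention mentioned above more concrete, here is a minimal sketch of a log record with custom fields. The standard keys (`version`, `host`, `short_message`, `timestamp`, `level`) come from the GELF payload specification linked above; `_category` mirrors the field used in this guide's pattern examples, while `_title` and `_user_id` are purely illustrative extra fields.

```
{
  "version": "1.1",
  "host": "my-app-server",
  "short_message": "User signed in",
  "timestamp": 1722841200,
  "level": 6,
  "_category": "auth",
  "_title": "Sign-in",
  "_user_id": "12345"
}
```

Because every custom field carries the leading underscore, such a message can be forwarded as-is to any GELF-compatible tool, exactly as described in the paragraph above.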
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ca.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
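The download section earlier in this guide tells you to decompress the obtained archive to get the **ldp-tail** binary. On Linux or macOS, a typical sequence looks like the sketch below, assuming the release ships as a `.tar.gz` archive; the file name is purely illustrative and depends on the release you actually downloaded.

```
# Extract the downloaded release archive (file name is illustrative)
tar -xzf ldp-tail_linux_amd64.tar.gz

# Make the binary executable, then point it at the demo stream from this guide
chmod +x ldp-tail
./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo"
```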
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-gb.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-ie.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-sg.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.en-us.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-es.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.es-us.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-ca.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.fr-fr.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.it-it.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
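As a complement to the hunks above, here is a sketch of a `--match` invocation on the demo stream. The `_title.start=another` operator form is an assumption based on the parameters section of the ldp-tail README linked above, so verify the exact operator names there before relying on it.

```
# Hypothetical --match usage: keep only records whose _title starts with "another"
# (operator syntax assumed from the ldp-tail README; check the linked documentation)
$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --match "_title.start=another"
```

Several `--match` flags can be passed on the same command line to combine conditions, as the guide notes.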
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pl-pl.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md index d25f909f0ad..6a44653aed3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail/guide.pt-pt.md @@ -20,7 +20,7 @@ The Logs Data Platform allows you to connect different applications or servers t ### Download and test ldp-tail in two minutes -**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases){.external} to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. +**ldp-tail** is derived from an internal tool used by OVHcloud engineers to follow in real time hundreds of applications and servers logs. It is written in Go and is completely open-source. So if you're curious enough, you can check the code at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can also download binary releases from this website. Go to [https://github.com/ovh/ldp-tail/releases](https://github.com/ovh/ldp-tail/releases) to download the release for your platform. 64-bit versions of Linux, Windows and Mac OS X are currently supported. Decompress the archive obtained and you will get the **ldp-tail** binary. You can test it right away on our demo stream by using this command in a terminal. @@ -49,7 +49,7 @@ You will also find on this page a link to the ldp-tail release page and three wa ### Formatting and Filtering -**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters){.external}. Here are the two main options that you can use to enhance your output. +**ldp-tail** is not just a plain tail (as its name suggest). It comes with advanced formatting and filtering capabilities. The full documentation of these capabilities are all available at the [github website](https://github.com/ovh/ldp-tail#parameters). Here are the two main options that you can use to enhance your output. #### The pattern option @@ -64,7 +64,7 @@ My Title: Success , The Joke: Success is relative. The more success, the more re My Title: Freeway , The Joke: When everything is coming your way, you're on the wrong side of the freeway. 
``` -Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external} field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. +Please note that in this example we use the [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification) field naming convention of, which means that your extra fields must all have an underscore. This is because the WebSocket endpoint sends messages fully compatible with the GELF format so you can use them after in any GELF compatible tool. The pattern option allows you also to customize colors, background and text colors are customizable. @@ -82,7 +82,7 @@ $ ldp@ubuntu:~$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --p #### The match option -As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail){.external}. You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. +As the name implies, the match option is able to choose which messages you want or don't want to display in your ldp-tail. The option contains several operators, all described at [https://github.com/ovh/ldp-tail](https://github.com/ovh/ldp-tail). You can easily display messages beginning with some values or display only message that have a certain field or whose a field is higher or lower than a value. Here is how you can display only logs that have a title beginning with the word "another" @@ -94,7 +94,7 @@ You can of course combine multiple matches by issuing **ldp-tail --match ` and `` values with UNIX tim $ ldp@ubuntu:~$ ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo&begin=1722841200&end=1722848400" --pattern "{{date .timestamp}}: {{ ._category }}" ``` -You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/){.external} to easily convert dates to unix timestamps. +You can use the website [https://www.unixtimestamp.com/](https://www.unixtimestamp.com/) to easily convert dates to unix timestamps. 
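To tie the two options together, the sketch below reuses the demo address and the `{{date .timestamp}}` / `{{ ._category }}` template fragments already shown in this guide; only the free-form text around the placeholders is new, which is precisely what the pattern option lets you customise.

```
# Same demo stream and fields as above, with a different output layout
$ ./ldp-tail --address "wss://gra1.logs.ovh.com/tail/?tk=demo" --pattern "[{{date .timestamp}}] category={{ ._category }}"
```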
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.de-de.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
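As a concrete illustration of the convention described in the hunks above, here is what a minimal GELF 1.1 record with custom fields might look like. The host name and field values are made-up examples; the structure (standard fields plus underscore-prefixed extras) follows the GELF payload specification linked above.

```
{
  "version": "1.1",
  "host": "web-01.example.com",
  "short_message": "User logged in",
  "timestamp": 1722841200,
  "level": 6,
  "_user_id": 42,
  "_category": "auth"
}
```

Both `_user_id` and `_category` carry the leading underscore required for extra fields; without it, the record would not follow the convention described above.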
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-asia.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-au.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ca.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-gb.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-ie.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-sg.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.en-us.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-es.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.es-us.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-ca.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.fr-fr.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.it-it.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pl-pl.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md index 97b9fb19a76..3753a511da9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention/guide.pt-pt.md @@ -17,7 +17,7 @@ Now that you can send logs, you may be wondering how to tell Logs Data Platform ### What is a valid log for Logs Data Platform? -Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. +Each log received on Logs Data Platform is transformed into a [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification)-formatted log. What is GELF? A standardized JSON way to send logs. GELF stands for Graylog Extended Log Format. Using this format gives us two advantages: It is directly compatible with Graylog and it is still extensible enough to enrich your logs as you would like to. This format impose a few conventions that if you don't follow can have many consequences: @@ -107,7 +107,7 @@ will become: } ``` -So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +So this is everything you need to know to send valid messages format and not shoot yourself in the foot. If you have any question you can always reach us [on the community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
Happy Logging @@ -115,5 +115,5 @@ Happy Logging - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.de-de.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. +First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). 
LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. 
For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. - [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-asia.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. +First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. 
- Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. 
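As a very rough sketch of what sending one of these formats looks like in practice (the full examples follow further down this guide), a single GELF record could be pushed to the TLS-secured GELF port with an `echo` piped into `openssl`, as the quick start describes. Everything in angle brackets is a placeholder for the address, port and stream token of your own cluster, and the `_X-OVH-TOKEN` field name is an assumption here; note the trailing null (`\0`) delimiter, which is the only delimiter the GELF input accepts.

```shell
# Sketch only: replace <ldp-cluster>, <gelf-tls-port> and <stream-token>
# with the values of your own Logs Data Platform cluster and stream.
# The stream token is assumed to travel in a "_X-OVH-TOKEN" field.
echo -e '{"version":"1.1","host":"my-host","short_message":"hello LDP","_X-OVH-TOKEN":"<stream-token>"}\0' \
  | openssl s_client -quiet -no_ign_eof -connect <ldp-cluster>:<gelf-tls-port>
```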
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. 
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-au.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. +First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. 
See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). 
Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. - [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ca.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. 
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). 
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. 
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-gb.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. +First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. 
See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). 
Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. - [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-ie.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. 
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). 
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. 
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-sg.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.en-us.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
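
To illustrate the GELF input described in the list above, a single test record can be pushed with `echo` piped into `openssl`, as the guide does later on. This is only a sketch: the cluster address, the SSL port, the stream token value and the token field name are placeholders and assumptions; take the real values from your service's `About` page and your stream settings.

```bash
# Placeholders/assumptions: replace with the values from the "About" page and your stream.
LDP_HOST="<your-cluster>.logs.ovh.com"
GELF_SSL_PORT="<gelf-ssl-port>"
TOKEN="<your-stream-token>"

# Minimal GELF record: JSON payload, timestamp in seconds from epoch,
# terminated by a null byte (\0) as required by the GELF input.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello from GELF","timestamp":'"$(date +%s)"',"_user_id":9001,"X-OVH-TOKEN":"'"$TOKEN"'"}\0' \
  | openssl s_client -quiet -no_ign_eof -connect "$LDP_HOST:$GELF_SSL_PORT"
```

If the record is accepted, it should appear in the stream's Graylog view within a few seconds, provided the timestamp is current.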
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-es.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.es-us.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
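
Along the same lines, a single LTSV record can be tested against the line-delimited LTSV input mentioned above. This is a sketch only: the field names, the cluster address, the port and the token value are placeholders and assumptions to adapt to your own stream.

```bash
# Placeholders/assumptions: replace with the values from the "About" page and your stream.
LDP_HOST="<your-cluster>.logs.ovh.com"
LTSV_LINE_SSL_PORT="<ltsv-line-ssl-port>"
TOKEN="<your-stream-token>"

# One LTSV record: tab-separated key:value pairs with an RFC 3339 timestamp,
# terminated by the newline that echo adds (line-delimited LTSV input).
echo -e "time:$(date -u +%Y-%m-%dT%H:%M:%SZ)\thost:example.org\tmessage:Hello from LTSV\tX-OVH-TOKEN:$TOKEN" \
  | openssl s_client -quiet -no_ign_eof -connect "$LDP_HOST:$LTSV_LINE_SSL_PORT"
```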
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-ca.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.fr-fr.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
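
For the RFC 5424 input listed above, a single syslog-formatted frame can be tested the same way. This sketch assumes the stream token is carried in the structured-data element; the SD-ID (here the documentation example `exampleSDID@32473`), the token field name, the cluster address and the port are all assumptions or placeholders to check against your stream settings.

```bash
# Placeholders/assumptions: replace with the values from the "About" page and your stream.
LDP_HOST="<your-cluster>.logs.ovh.com"
RFC5424_SSL_PORT="<rfc5424-ssl-port>"
TOKEN="<your-stream-token>"

# One RFC 5424 frame: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [STRUCTURED-DATA] MSG
# <134> = facility local0 (16) * 8 + severity informational (6); timestamp is RFC 3339.
echo "<134>1 $(date -u +%Y-%m-%dT%H:%M:%SZ) example.org test-app - - [exampleSDID@32473 X-OVH-TOKEN=\"$TOKEN\"] Hello from RFC 5424" \
  | openssl s_client -quiet -no_ign_eof -connect "$LDP_HOST:$RFC5424_SSL_PORT"
```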
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.it-it.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
- Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header.
- Once you have created your credentials, the main interface will appear:
@@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features:
Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available
-- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter.
-- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter.
+- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter.
+- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter.
- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC
-5424](https://tools.ietf.org/html/rfc5424){.external}.
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}.
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}).
+5424](https://tools.ietf.org/html/rfc5424).
+- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/).
+- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)).
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport.
@@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th
![About page](images/about.png){.thumbnail}
-To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
+To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too.
*GELF*:
@@ -141,7 +141,7 @@ helps going
Giving you all the messages that contains the terms `helps` and `going`.
-Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}.
+Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html).
Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs.
@@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo
- [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform.
- [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them.
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards)
-- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external}
+- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries)
- Documentation: [Guides](/products/observability-logs-data-platform)
- Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)))
-- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
index 64a210e54bc..094e94436fc 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pl-pl.md
@@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start
### Welcome to Logs Data Platform
-First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use.
+First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). 
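To make the format list above more concrete, here is a minimal GELF record sent over TLS with `openssl`. This is only a sketch: the cluster address, port and write token are placeholders to replace with the values shown on your **About** page and stream, and the `_X-OVH-TOKEN` field name should be checked against your stream's help page before use.

```bash
# Sketch: send one GELF test message over TLS.
# <your-cluster-address>, <gelf-tls-port> and <your-write-token> are placeholders,
# taken from the "About" page and from your stream configuration.
echo -e '{"version":"1.1","host":"example.org","short_message":"Hello LDP","_X-OVH-TOKEN":"<your-write-token>","timestamp":'$(date +%s)'}\0' \
  | openssl s_client -quiet -no_ign_eof -connect <your-cluster-address>:<gelf-tls-port>
```

The timestamp is generated with `date +%s` because GELF expects seconds from epoch, and the trailing `\0` matters because, as noted above, the GELF input only accepts a null delimiter.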
Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. 
- [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md index 64a210e54bc..094e94436fc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start/guide.pt-pt.md @@ -11,7 +11,7 @@ Welcome to the quick start tutorial of the Logs Data Platform. This Quick start ### Welcome to Logs Data Platform -First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs){.external}. Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. +First, you will have to create a new account on [the Logs Data Platform page](https://www.ovh.com/fr/data-platforms/logs). Creating an account is totally free. With the pay-as-you-go pricing model of Logs Data Platform you pay only for what you use. - Log in to the [OVHcloud Control Panel](/links/manager), and navigate to the Bare Metal Cloud section located at the top left in the header. - Once you have created your credentials, the main interface will appear: @@ -64,12 +64,12 @@ The menu **"..."** at the right gives you several features: Logs Data Platform supports several logs formats, each one of them has its own advantages and disadvantages. Here are the different formats available -- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. +- **GELF**: This is the native format of logs used by Graylog. This JSON format will allow you to send logs really easily. 
See: [https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and is still human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. - **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found at this link: [RFC -5424](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +5424](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). Here are the ports you can use on your cluster to send your logs. You can either use the secured ones with SSL Enabled (TLS >= 1.2) or use the plain unsecured ones if you can't use an SSL transport. @@ -83,7 +83,7 @@ As said before, you can retrieve the ports and the address of your cluster at th ![About page](images/about.png){.thumbnail} -To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time){.external}, RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339){.external}. Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. +To send your logs to Logs Data Platform you can easily test your stream by doing, for example, a simple `echo` followed by an `openssl` command. Here are 3 examples, choose the format you like the most with your preferred terminal: Note that each format has its own timestamp format: GELF uses [seconds from epoch](https://en.wikipedia.org/wiki/Unix_time), RFC 5424 and LTSV use the [RFC 3339](https://tools.ietf.org/html/rfc3339). 
Don't forget to change the **timestamp** to your current time to see your logs (by default Graylog only display recent logs, you can change the scope of the search by using the top left time picker in the Graylog web interface). Also please ensure to change the **token** to put the right one too. *GELF*: @@ -141,7 +141,7 @@ helps going Giving you all the messages that contains the terms `helps` and `going`. -Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html){.external}. +Graylog allows you to extensively search through your logs without compromising usability. For more information about how to craft relevant searches on Graylog, please visit [Graylog Search Documentation](https://go2docs.graylog.org/4-x/making_sense_of_your_log_data/writing_search_queries.html). Send several logs with different values for `user_id`, for example. At the left of the page you will see the fields present in your stream, you can click on the `user_id` checkbox to display all the values for this field along the logs. @@ -185,7 +185,7 @@ We have only scratched the surface of what Logs Data Platform can do for you. yo - [Configure your syslog-ng](/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng) to send your Linux logs to Logs Data Platform. - [Using roles](/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission) to allow other users of the platform to let them see yours beautiful Dashboards or let them dig in your Streams instead of doing it for them. - [Using OpenSearch Dashboards and aliases to unleash the power of OpenSearch](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards) -- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries){.external} +- If you want to master Graylog, this is the place to go: [Graylog documentation](https://docs.graylog.org/docs/queries) - Documentation: [Guides](/products/observability-logs-data-platform) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) -- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Join our community of users on [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md index 33e71ebfaa9..49ff3207f39 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.de-de.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and 
local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-asia.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-au.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ca.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md index 4212aa05b50..1323ddd4f96 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-gb.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) \ No newline at end of file diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-ie.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-sg.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.en-us.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-es.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.es-us.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-ca.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.fr-fr.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.it-it.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pl-pl.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. 
End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md index 4258ccc0425..dea646ab332 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_responsibility_model/guide.pt-pt.md @@ -45,7 +45,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | | Offer standard solutions and protocols for importing and exporting data using API for logs and dashboards | I | RA | -| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} for data export and local analysis | RA | | +| Decide to use [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) for data export and local analysis | RA | | #### 2.3. Customer Information System setup @@ -164,7 +164,7 @@ For your information, a **Log forwarder agent** is considered as a tool (full so | **Activity** | **Customer** | **OVHcloud** | | --- | --- | --- | -| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror){.external} | RA | | +| Manage reversibility operations : manual extract, using API, [ldp-archive-mirror](https://github.com/ovh/ldp-archive-mirror) | RA | | | Migrate/transfer data | RA | | ### 5. End of service @@ -189,5 +189,5 @@ For your information, a **Log forwarder agent** is considered as a tool (full so - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.de-de.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. 
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights.
+Log policies also affect several teams regarding access rights: for instance, product managers may access some data but be denied access to security logs. That's why we decided to provide [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to let users configure access rights. This document will show you how you can use this system to configure access rights.
## Creating a Role
@@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different
## Using API
-Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}.
+Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs).
Here are a few examples of the role API calls you can use:
@@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use:
- `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share.
-Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console.
+Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console.
## Go further
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
index 5b0384f4950..26d4de7a02e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-asia.md
@@ -7,7 +7,7 @@ updated: 2022-07-28
## Overview
Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner.
-Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs.
That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-au.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). 
Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ca.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-gb.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-ie.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-sg.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.en-us.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-es.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.es-us.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md index c77eeec11d5..2ad2f2ba722 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-ca.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few exemples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few exemples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md index c77eeec11d5..2ad2f2ba722 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.fr-fr.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few exemples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few exemples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.it-it.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pl-pl.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md index 5b0384f4950..26d4de7a02e 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/getting_started_roles_permission/guide.pt-pt.md @@ -7,7 +7,7 @@ updated: 2022-07-28 ## Overview Logs policies are often decisions made by an entire team, not individuals. Collaboration remains an utmost priority for Logs Data Platform, following this strategy it shall enable everyone to share data in a easy and secure manner. -Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control){.external} to users to configure access rights. This document will expose you how you can use this system to configure access rights. +Log policies also affect several teams regarding access rights, for instance the Product managers can access some data but be denied to access security logs. That's why we decided to provide a [Role Based Access Control](https://en.wikipedia.org/wiki/Role-based_access_control) to users to configure access rights. This document will expose you how you can use this system to configure access rights. ## Creating a Role @@ -61,7 +61,7 @@ A user can use their usual Logs Data Platform account credentials on a different ## Using API -Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs){.external}. +Role management can be automated by using the [OVHcloud API](https://api.ovh.com/console/#/dbaas/logs). Here are a few examples of the role API calls you can use: @@ -124,11 +124,11 @@ Here are a few examples of the role API calls you can use: - `RolePermissionAliasCreation`: A JSON object containing the field {aliasId} (string), the UUID of the alias you want to share. -Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs){.external}, and try it with the provided console. +Don't hesitate to [explore the API](https://api.ovh.com/console/#/dbaas/logs), and try it with the provided console. 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.de-de.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-asia.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-au.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ca.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-gb.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-ie.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-sg.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.en-us.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-es.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.es-us.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-ca.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.fr-fr.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.it-it.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pl-pl.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md index c5cd07bbe0d..391208f34be 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_apache/guide.pt-pt.md @@ -19,15 +19,15 @@ This line already gives a lot of information but it can be difficult to extract This guide will present you with three non-intrusive ways to send logs to the Logs Data platform: - ask Apache to pipe log entries directly to the platform. -- use [syslog-ng](https://syslog-ng.org/){.external} to parse and send all of your logs -- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat){.external} with apache module +- use [syslog-ng](https://syslog-ng.org/) to parse and send all of your logs +- setup [filebeat](https://www.elastic.co/fr/products/beats/filebeat) with apache module ## Requirements In order to follow this guide you will need: - The openssl package: as we are using it to send the logs securely. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -169,7 +169,7 @@ If you want to use your own log format and include some useful information here |runtime_num|Execution time for processing some request, e.g. 
X-Runtime header for application server or processing time of SQL for DB server.|`%{X-Runtime}o`|$upstream_http_x_runtime| |apptime_num|Response time from the upstream server|-|$upstream_response_time| -The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html){.external} +The full list of logs formats that can be used in Apache are described here [mod_log_config.html](http://httpd.apache.org/docs/current/en/mod/mod_log_config.html) ### Using Filebeat @@ -180,5 +180,5 @@ The complete procedure of its installation is described [on this page](/pages/ma - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.de-de.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. @@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. 
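To make the Apache ingestion guide above more concrete, here is a minimal sketch of the two directives involved: a custom `LogFormat` that captures the `X-Runtime` response header from the field-mapping table, and a `CustomLog` that pipes entries through `openssl` (the method where Apache pipes log entries directly to the platform, which is why the openssl package is listed as a requirement). The cluster address, port and format name are placeholders, not values taken from the guide; in practice the format string must also embed your stream token and follow one of the platform's accepted formats.

```apache
# Sketch only: host, port and format name are placeholders.
# Combined log fields plus the X-Runtime response header
# (runtime_num in the field-mapping table above).
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" runtime=%{X-Runtime}o" ldp_combined

# Piped logging: Apache hands each entry to an openssl TLS client
# instead of writing it to a file.
CustomLog "|/usr/bin/openssl s_client -quiet -connect <your-cluster>.logs.ovh.com:<port>" ldp_combined
```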
+You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. -When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. 
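As a minimal sketch of the configuration described above: template setup disabled, your own credentials (or a token) filled in, and the fixed `ldp-logs` index left untouched. The host name is a placeholder for your cluster's OpenSearch entry point, and the `setup.ilm.enabled` line is an assumption commonly needed with Filebeat 7.x to keep a custom index name, not something stated in the guide.

```yaml
# filebeat.yml (excerpt) -- host name is a placeholder
setup.template.enabled: false    # template configuration is unneeded for this endpoint
setup.ilm.enabled: false         # assumption: prevents Filebeat from overriding the index name

output.elasticsearch:
  hosts: ["https://<your-cluster>.logs.ovh.com:9200"]
  username: "<username-or-token-name>"
  password: "<password-or-token-value>"
  index: "ldp-logs"              # keep as is: dedicated destination index
```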
-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-asia.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
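For the "Setup Filebeat OSS 7.X" step described above, a package-based install on a Debian-like system typically looks like the sketch below. The download URL follows Elastic's usual artifact naming for Filebeat OSS 7.12.1 (the last version the guide lists as compatible with OpenSearch); check it against the Filebeat OSS past-releases page before use.

```bash
# Assumed URL pattern -- verify on the Filebeat OSS downloads page.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-amd64.deb
sudo dpkg -i filebeat-oss-7.12.1-amd64.deb

# The package installs its configuration under /etc/filebeat/
ls /etc/filebeat/
```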
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-au.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ca.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
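Once the output section and the modules described above are in place, it helps to validate the configuration and watch the first shipments before running Filebeat as a service. These are standard Filebeat 7.x subcommands; the service name assumes a package install.

```bash
# Check the configuration syntax and the connection to the output
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml

# Run once in the foreground with logging to stderr to watch the first events
filebeat -e -c /etc/filebeat/filebeat.yml

# Then run it as a service (package installs ship a systemd unit)
sudo systemctl enable --now filebeat
```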
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md index 24e4c01de66..73bf5afee54 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-gb.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are its resilient protocol to send logs, and a variety of ready-to-use modules for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
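To check from the command line that entries actually reach the platform (and carry the type value noted in these guides, such as apache, syslog or apache-error), a plain OpenSearch search request can be used. This is only a sketch: the host is a placeholder, and it assumes your account exposes a searchable alias; the `ldp-logs` name used for writing may not be the alias you read from.

```bash
# Sketch: host and alias are assumptions; adjust to your own cluster and read alias
curl -u "<username>:<password>" \
  "https://<your-cluster>.logs.ovh.com:9200/<your-read-alias>/_search?q=type:apache&size=1&pretty"
```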
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-ie.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-sg.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.en-us.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-es.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.es-us.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-ca.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.fr-fr.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.it-it.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pl-pl.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions ### Setup Filebeat OSS 7.X in your system -Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external} +Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat) -You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately. +You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately. -For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution. +For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution. The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**). @@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb > [!warning] > Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch. -> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}. +> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats). ### Configure Filebeat OSS 7.X on your system -In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}. +In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html). Filebeat expects a configuration file named **filebeat.yml** . @@ -216,7 +216,7 @@ output.elasticsearch: This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index. 
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs. +When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs. #### Enable Apache Filebeat module @@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool. -- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external} -- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external} +- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html) +- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html) - Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) ## Going further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md index 8fc5d85ad78..0c10b0739cc 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat/guide.pt-pt.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat){.external} is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. +[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is an open source file harvester, used to fetch logs files and can be easily setup to feed them into Logs Data Platform. The main benefits of Filebeat are it's resilient protocol to send logs, and a variety of modules ready-to-use for most of the common applications. 
@@ -15,18 +15,18 @@ This guide will describe how to setup Filebeat OSS on your system for forwarding

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)

## Instructions

### Setup Filebeat OSS 7.X in your system

-Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat){.external}
+Filebeat supports many platforms as listed here [https://www.elastic.co/downloads/beats/filebeat](https://www.elastic.co/downloads/beats/filebeat)

-You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/){.external} to compile it) or just download the binary to start immediately.
+You can decide to setup Filebeat OSS from a package or to compile it from source (you will need the latest [go compiler](https://golang.org/) to compile it) or just download the binary to start immediately.

-For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss){.external} to download the best version for your distribution.
+For this part, head to [Filebeat OSS download website](https://www.elastic.co/fr/downloads/past-releases#filebeat-oss) to download the best version for your distribution.

The following configuration files have been tested on the latest version of Filebeat OSS compatible with OpenSearch (**7.12.1**).

@@ -34,11 +34,11 @@ The package will install the config file in the following directory: `/etc/fileb

> [!warning]
> Do not use a version superior than the 7.12 version. They are currently not compatible with OpenSearch.
-> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats){.external}.
+> More information in the [matrix compatibility documentation](https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/#compatibility-matrix-for-beats).

### Configure Filebeat OSS 7.X on your system

-In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html){.external}.
+In the following example we will enable Apache and Syslog support, but you can easily prospect [anything else](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-modules.html).

Filebeat expects a configuration file named **filebeat.yml** .

@@ -216,7 +216,7 @@ output.elasticsearch:

This configuration deactivates the template configuration (unneeded for our endpoint). You need to provide your credentials **** and **** of your account. Like all Logs Data Platform APIs you can also use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens). Don't change **ldp-logs** since it is our special destination index.
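The Syslog side of the example is commonly covered by Filebeat's `system` module (this may differ from the exact input used in the guide); a hedged sketch with assumed paths:

```yaml
# modules.d/system.yml (sketch only)
- module: system
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*"]   # assumed Debian/Ubuntu path; use /var/log/messages* on RHEL-like systems
```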
-When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html){.external} to parse and structure the logs.
+When you use our OpenSearch endpoint with filebeat, it will use the [ingest module](https://www.elastic.co/guide/en/logstash/7.12/use-ingest-pipelines.html) to parse and structure the logs.

#### Enable Apache Filebeat module

@@ -310,13 +310,13 @@ Note the type value (apache or syslog or apache-error) that indicates the source

Filebeat is a handy tool to send the content of your current log files to Logs Data Platform. It offers a clean and easy way to send your logs without changing the configuration of your software. Don't hesitate to check the links below to master this tool.

-- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html){.external}
-- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html){.external}
+- Configuration's details: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html)
+- Getting started: [https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html)
- Learn how to configure Filebeat and Logstash to add your own extra filters: [Dedicated input - Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input)

## Going further

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.de-de.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-asia.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-au.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
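In practice this means running Fluent Bit as a DaemonSet that tails `/var/log/containers/*.log`, enriches records with Kubernetes metadata and ships them to your Logs Data Platform entry point with the stream token attached. The ConfigMap below is a rough sketch only, not the manifest used in this guide: the namespace, host, port, output plugin and token value are assumptions to check against your own cluster and entry points.

```yaml
# Illustrative Fluent Bit ConfigMap for a logging DaemonSet (sketch only)
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Tag     kube.*

    [FILTER]
        Name    kubernetes
        Match   kube.*

    [FILTER]
        Name    modify
        Match   *
        Add     X-OVH-TOKEN <your-stream-token>

    [OUTPUT]
        Name    gelf
        Match   *
        Host    <your-cluster>.logs.ovh.com
        Port    12202
        Mode    tcp
        tls     on
        Gelf_Short_Message_Key log
```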
-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ca.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
index b276ae04ce3..f9e5b0d2029 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-gb.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-ie.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-sg.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.en-us.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-es.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.es-us.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements

Note that in order to complete this tutorial, you should have at least:

-- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external}
+- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29)
- [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- A working kubernetes cluster with some pods already logging to stdout.
- 15 minutes.
@@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha

- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
- Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
- Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
index 38412b2e980..d633fb3cf00 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-ca.md
@@ -8,13 +8,13 @@ updated: 2024-07-18

In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.

-[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}.
+[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes).

## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - A working kubernetes cluster with some pods already logging to stdout. - 15 minutes. @@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md index 38412b2e980..d633fb3cf00 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.fr-fr.md @@ -8,13 +8,13 @@ updated: 2024-07-18 In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform. -[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}. +[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. 
It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform; you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes). ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - A working kubernetes cluster with some pods already logging to stdout. - 15 minutes. @@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md index 38412b2e980..d633fb3cf00 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.it-it.md @@ -8,13 +8,13 @@ updated: 2024-07-18 In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform. -[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}. +[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening.
How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform; you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes). ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - A working kubernetes cluster with some pods already logging to stdout. - 15 minutes. @@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md index 38412b2e980..d633fb3cf00 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pl-pl.md @@ -8,13 +8,13 @@ updated: 2024-07-18 In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform. -[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem. This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}. +[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community.
Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform; you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes). ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - A working kubernetes cluster with some pods already logging to stdout. - 15 minutes. @@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md index 38412b2e980..d633fb3cf00 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit/guide.pt-pt.md @@ -8,13 +8,13 @@ updated: 2024-07-18 In this tutorial, you will learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform. -[Kubernetes](https://kubernetes.io/){.external} is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pods logs in one place and analyze them easily ? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/){.external} ecosystem.
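Editor's note on the requirement that pods are "already logging to stdout": that is all an application has to satisfy, since Fluent Bit (typically deployed as a DaemonSet) reads the container log files that the runtime writes on each node. The following sketch is illustrative only and is not part of the guide; the field names are arbitrary examples of one structured line per event.

```python
import json
import sys
import time

def log(level: str, message: str, **fields) -> None:
    """Write one JSON object per line to stdout; the container runtime captures it."""
    record = {"time": time.time(), "level": level, "message": message, **fields}
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()  # avoid buffering so lines reach the node's log file promptly

if __name__ == "__main__":
    log("info", "application started", app="demo")
    log("warning", "cache miss ratio above threshold", ratio=0.42)
```

Anything written this way ends up in the pod's stdout stream, which is what the Fluent Bit setup described later in the guide picks up and forwards to your stream.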
This tutorial will help you to configure it for Logs Data Platform, you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes){.external}. +[Kubernetes](https://kubernetes.io/) is the de facto standard to manage containerized applications on cloud platforms. It is open source, has a large ecosystem, and has an ever-growing community. Kubernetes is great but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the more difficult it can be to navigate through the logs and have a clear picture of what's happening. How can you centralize all your Kubernetes pod logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. [Fluent Bit](https://fluentbit.io/) is a fast and lightweight log processor and forwarder. It is open source, cloud oriented and a part of the [Fluentd](https://fluentd.org/) ecosystem. This tutorial will help you to configure it for Logs Data Platform; you can of course apply it to our [fully managed Kubernetes offer](/links/public-cloud/kubernetes). ## Requirements Note that in order to complete this tutorial, you should have at least: -- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.co.uk/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [Created at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - A working kubernetes cluster with some pods already logging to stdout. - 15 minutes. @@ -163,5 +163,5 @@ And that's it. Your kubernetes activity is now perfectly logged in one place. Ha - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.de-de.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic.
Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. 
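Editor's note: once a dedicated Logstash input like the one this guide describes is up, a quick way to confirm that the collector address accepts events is to push a single test line to it. The snippet below is only a sketch: the host, the port, the use of TLS and the expectation of a plain `tcp` input with a JSON-lines codec are all assumptions; use the collector address, port and input configuration shown for your own input in the Logs Data Platform manager.

```python
import socket
import ssl

# Placeholders: replace with the collector address and port of YOUR dedicated input.
COLLECTOR_HOST = "<your-cluster>.logs.ovh.com"
COLLECTOR_PORT = 12345  # the port assigned to your input

context = ssl.create_default_context()

with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=COLLECTOR_HOST) as tls_sock:
        # Assuming a tcp input with a json_lines codec: one JSON document per line.
        tls_sock.sendall(b'{"message": "test event from my workstation"}\n')

print("Test event sent, check the stream attached to your Logstash input.")
```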
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-asia.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-au.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. 
#### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ca.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). 
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-gb.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. 
You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. 
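Editor's note: if you have never written a Grok pattern, the links above are the right starting point; a Grok expression is essentially a library of named regular expressions. As a rough, Logstash-independent illustration, a pattern such as `%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes}` boils down to named capture groups:

```python
import re

# Simplified regex equivalent of the grok pattern
#   %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes}
# (the real grok IP/URI patterns are stricter than these).
LINE = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<method>\w+)\s+"
    r"(?P<request>\S+)\s+"
    r"(?P<bytes>\d+)"
)

match = LINE.match("203.0.113.7 GET /index.html 1532")
if match:
    print(match.groupdict())
    # {'client': '203.0.113.7', 'method': 'GET', 'request': '/index.html', 'bytes': '1532'}
```

Tools like the Grok Constructor linked above let you iterate on real patterns against sample log lines before putting them in your Logstash filter section.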
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-ie.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-sg.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. 
#### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.en-us.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). 
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-es.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. 
You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. 
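Editor's note: as a complement to the dedicated Logstash input, it can be handy to check a stream token without any collector in between. The sketch below sends a single GELF event straight to the platform; the entry point, the port and the token field name are assumptions to adapt from your stream's page in the Logs Data Platform manager (GELF over TCP/TLS, with the message terminated by a NUL byte, is assumed here).

```python
import json
import socket
import ssl

# Placeholders: take the entry point, port and token from your stream's details page.
LDP_HOST = "<your-cluster>.logs.ovh.com"
LDP_PORT = 12202                    # assumed GELF TCP/TLS port; check your cluster's entry points
STREAM_TOKEN = "<your-stream-token>"

event = {
    "version": "1.1",
    "host": "my-test-host",
    "short_message": "manual test event",
    "_X-OVH-TOKEN": STREAM_TOKEN,   # token field name as shown on the stream page
    "_environment": "test",         # additional GELF fields start with an underscore
}

context = ssl.create_default_context()
with socket.create_connection((LDP_HOST, LDP_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=LDP_HOST) as tls_sock:
        # GELF over TCP frames one JSON document per NUL-terminated message.
        tls_sock.sendall(json.dumps(event).encode("utf-8") + b"\x00")

print("Event sent; it should show up in the stream within a few seconds.")
```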
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.es-us.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-ca.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. 
#### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.fr-fr.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). 
This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.it-it.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. 
You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pl-pl.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. #### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
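As a side note to the supported-plugins list below, if you want to compare with what an equivalent self-managed Logstash ships with, the `logstash-plugin` helper bundled with Logstash can print the installed plugins. This only applies to a local installation used for comparison or testing; the hosted collector on Logs Data Platform does not give you shell access, and the relative path below assumes a default archive install.

```shell-session
# On a local Logstash installation only (not on the hosted collector):
# list the plugins bundled with this Logstash, optionally filtered by type.
$ bin/logstash-plugin list
$ bin/logstash-plugin list --group input
```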
##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md index 23543a960ce..35e4585183a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input/guide.pt-pt.md @@ -5,7 +5,7 @@ updated: 2025-04-25 ## Objective -[Logstash](https://github.com/elastic/logstash){.external} is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash){.external}. +[Logstash](https://github.com/elastic/logstash) is an open source software developed by Elastic. Based on its features, it is possible to send messages from several inputs to different types of output using a variety of codecs, while processing them and transforming them in the process. You can learn a lot more about it on [the official website](https://www.elastic.co/products/logstash). This guide will demonstrate how to deploy a personalized Logstash having a specific configuration, and send logs from any source to your stream directly on the Logs Data Platform. @@ -212,7 +212,7 @@ This is an address of your collector for the cluster on Logs Data Platform. Send The version hosted by Logs Data Platform is the Latest Logstash 7 version (7.8 as of July 2020). Of course we will update to the new versions as soon as they become available. 
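To make the collector address mentioned above more concrete, here is a hedged sketch of pushing a single test event to a dedicated Logstash collector from a client machine. It assumes the Logstash configuration you deployed exposes a TLS-secured TCP input with a JSON codec; `<your-collector-address>` and `<port>` are placeholders for the values displayed for your own input, and the field names in the payload are purely illustrative.

```shell-session
# Hypothetical test, assuming your Logstash input section defines a TCP input
# with TLS and a JSON codec. Replace <your-collector-address> and <port> with
# the address and port shown for your dedicated input.
$ echo '{"message":"hello from my app","service":"demo","status":200}' \
  | openssl s_client -quiet -connect <your-collector-address>:<port>
```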
#### Logstash Plugins -For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +For your information, here is the list of Logstash plugins we support. Of course we will welcome any suggestion on additional plugins. Don't hesitate to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ##### Inputs plugins @@ -359,11 +359,11 @@ To do this, please go to the dedicated page by clicking on the `Console output`{ Here are some links to help you go further with Logstash -- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html){.external} -- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html){.external} +- [Logstash official documentation](https://www.elastic.co/guide/en/logstash/current/index.html) +- [Grok filters documentation](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) - [Logstash + Groks + Filebeat = Awesome](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat) -- [Grok Constructor](http://grokconstructor.appspot.com/do/match){.external} -- [A Ruby regular expression editor](https://rubular.com/){.external} +- [Grok Constructor](http://grokconstructor.appspot.com/do/match) +- [A Ruby regular expression editor](https://rubular.com/) That's all you need to know about the Logstash Collector on Logs Data Platform. @@ -371,5 +371,5 @@ That's all you need to know about the Logstash Collector on Logs Data Platform. - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.de-de.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. 
More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-asia.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. 
-- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-au.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. 
For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. -- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ca.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md index 21a816cee78..94b35495c8a 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-gb.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-ie.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-sg.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.en-us.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-es.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.es-us.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-ca.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.fr-fr.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.it-it.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pl-pl.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md index 6942b7c5d82..b035abbd503 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_mutualized_inputs/guide.pt-pt.md @@ -27,11 +27,11 @@ Logs Data Platform imposes a few [constraints](/pages/manage_and_operate/observa The log formats that are accepted by Logs Data Platform are the following: -- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}. The GELF input only accepts a null (`\0`) delimiter. -- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org){.external}. LTSV has two inputs that accept a line delimiter or a null delimiter. -- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424){.external}. -- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/){.external}. 
-- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat){.external}, [Winlogbeat](https://www.elastic.co/beats/winlogbeat){.external}). +- **GELF**: This is the native format of logs used by Graylog. This JSON format allows you to send logs really easily. See the [GELF Payload Specification](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification). The GELF input only accepts a null (`\0`) delimiter. +- **LTSV**: This simple format is very efficient and human readable. You can learn more about it [here](http://ltsv.org). LTSV has two inputs that accept a line delimiter or a null delimiter. +- **RFC 5424**: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found [here](https://tools.ietf.org/html/rfc5424). +- **Cap'n'Proto**: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high speed performance. For more information, check out the official website: [Cap'n'Proto](https://capnproto.org/). +- **Beats**: A secure and reliable protocol used by the beats family in the Elasticsearch ecosystem (Ex: [Filebeat](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), [Winlogbeat](https://www.elastic.co/beats/winlogbeat)). ### Mutualized vs Dedicated inputs diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.de-de.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). 
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. 
Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. 
This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-asia.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. 
There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). 
The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. 
This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-au.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. 
Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). 
Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. 
By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ca.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. 
If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. 
In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. 
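Only the first and last lines of the TOML block appear above, so here is a minimal sketch, under assumptions, of a complete configuration matching that explanation: the component names (`journald`, `token`, `out`), the VRL line that adds the stream token, and every placeholder value are illustrative rather than taken from this guide, and exact option names vary between Vector releases, so check the source, transform and sink references discussed below.

```toml
# Minimal sketch (placeholder values): journald -> remap (adds the stream token) -> Elasticsearch-compatible sink
data_dir = "/var/lib/vector"          # optional, must be readable and writable by Vector

[sources.journald]
type = "journald"

[transforms.token]
type = "remap"
inputs = ["journald"]
source = '''
."X-OVH-TOKEN" = "<your-stream-token>"
'''

[sinks.out]
type = "elasticsearch"
inputs = ["token"]
endpoints = ["https://<ldp-cluster>.logs.ovh.com:9200"]
bulk.index = "ldp-logs"
compression = "gzip"                  # gzip is supported by the endpoint
healthcheck.enabled = true
auth.strategy = "basic"
auth.user = "<username>"
auth.password = "<password>"
```

With a file along these lines in place, Vector would typically be started with `vector --config <path-to-file>` (or via the service unit shipped with your package); the explanation that follows walks through each part of the configuration.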
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. 
Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-gb.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). 
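As a sketch of how the OpenSearch Ingest support mentioned above is typically used: a pipeline is created once, then referenced with the `?pipeline=` query parameter when indexing. The cluster address, credentials, stream token, pipeline name `add-service-tag` and the `set` processor below are placeholders and assumptions, not values taken from this guide, and pipeline management permissions on the mutualized endpoint should be checked against the OpenSearch Ingest documentation linked above.

```shell-session
$ # Create a hypothetical ingest pipeline that tags every document (placeholder names throughout)
$ curl -H 'Content-Type: application/json' -u '<user>:<password>' -XPUT 'https://<ldp-cluster>.logs.ovh.com:9200/_ingest/pipeline/add-service-tag' -d '{ "description" : "add a service field", "processors" : [ { "set" : { "field" : "service", "value" : "my-app" } } ] }'

$ # Index a log document through that pipeline using the ?pipeline= query parameter
$ curl -H 'Content-Type: application/json' -u '<user>:<password>' -XPOST 'https://<ldp-cluster>.logs.ovh.com:9200/ldp-logs/_doc?pipeline=add-service-tag' -d '{ "X-OVH-TOKEN" : "<stream-token>", "short_message" : "Hello ingest" }'
```

If the document then shows up in the stream with the extra `service` field, the processor ran before the log entered the pipeline described in this overview.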
## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. 
This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. 
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-ie.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. 
Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. 
This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. 
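To make the sink block more concrete, here is a minimal sketch of what such an Elasticsearch sink pointed at the Logs Data Platform endpoint could look like. The option names follow the Vector sink reference but can vary slightly between Vector versions, and the `<ldp-cluster>`, `<username>` and `<password>` placeholders are illustrative values you must replace with your own cluster address and credentials.

```toml
# Illustrative sink block, to be adapted to your Vector version and cluster
[sinks.ldp]
type = "elasticsearch"
inputs = ["token"]                                      # events produced by the remap transform named "token"
endpoints = ["https://<ldp-cluster>.logs.ovh.com:9200"] # placeholder: your Logs Data Platform cluster
bulk.index = "ldp-logs"                                 # the mutualized OpenSearch input index
compression = "gzip"                                    # gzip is supported by the endpoint
healthcheck.enabled = true                              # verify the platform answers before sending
auth.strategy = "basic"
auth.user = "<username>"                                # placeholder
auth.password = "<password>"                            # placeholder
```

Once the file is complete, you can check it with `vector validate` before starting the service.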
It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-sg.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. 
If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). 
Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. 
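As an illustration, a remap transform that only performs this token injection could look like the following sketch. The transform and source names (`token`, `journald`) match the ones used in this guide, while the token value itself is a placeholder to replace with the token of your stream.

```toml
[transforms.token]
type = "remap"
inputs = ["journald"]            # consume the events coming from the journald source
source = '''
# attach the Logs Data Platform stream token to every event
."X-OVH-TOKEN" = "<your-stream-token>"
'''
```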
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.en-us.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). 
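If you want to try the OpenSearch Ingest support mentioned above, the calls below are a hedged sketch using the standard ingest API: they create a pipeline with a simple `uppercase` processor, then index a document through it. The pipeline name `my-pipeline`, the placeholders and the assumption that your user is allowed to create pipelines on the mutualized endpoint are all illustrative; refer to the OpenSearch Ingest documentation for the list of available processors.

```shell-session
$ curl -H 'Content-Type: application/json' -u '<username>:<password>' -XPUT https://<ldp-cluster>.logs.ovh.com:9200/_ingest/pipeline/my-pipeline -d '{ "description" : "uppercase the test field", "processors" : [ { "uppercase" : { "field" : "test_field" } } ] }'
$ curl -H 'Content-Type: application/json' -u '<username>:<password>' -XPOST "https://<ldp-cluster>.logs.ovh.com:9200/ldp-logs/_doc?pipeline=my-pipeline" -d '{ "X-OVH-TOKEN" : "<stream-token>", "test_field" : "ovhcloud" }'
```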
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. 
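Since the payload is plain JSON, nothing prevents you from sending nested objects: as noted further down in this guide, the OpenSearch input flattens any sub-object or array it receives. The request below is a small sketch of such a payload, with the usual placeholders to replace with your own values; the exact names of the resulting flattened fields depend on the platform's flattening convention.

```shell-session
$ curl -H 'Content-Type: application/json' -u '<username>:<password>' -XPOST https://<ldp-cluster>.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "<stream-token>", "http" : { "status" : 200, "method" : "GET" } }'
```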
Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. 
This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-es.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. 
There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). 
The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. 
This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.es-us.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. 
Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). 
Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. 
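For reference, the source block itself can stay very small. The sketch below only declares the journald source and the global `data_dir` used to store its checkpoint data; the `current_boot_only` option is shown purely as an example of the tuning available and should be checked against the journald source reference for your Vector version.

```toml
data_dir = "/var/lib/vector"     # optional, must be allowed in read-write

[sources.journald]
type = "journald"
# only read entries from the current boot (illustrative option, see the source reference)
current_boot_only = true
```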
By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-ca.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. 
If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. 
In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. 
-The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configures the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform, named token here, has the sole purpose of adding the stream token value. It takes logs from the **inputs** named journald and adds an **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored.
Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.fr-fr.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). 
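The following section details this OpenSearch log endpoint with `curl` examples. The same minimal call can also be sketched in Python with the `requests` library; the cluster hostname, credentials and stream token below are placeholders to replace with your own values:

```python
# Minimal sketch: push one JSON document to the Logs Data Platform OpenSearch endpoint.
# Placeholders: replace the cluster, credentials and X-OVH-TOKEN with your own values.
import requests

LDP_CLUSTER = "your-cluster.logs.ovh.com"   # placeholder cluster hostname
LDP_USER = "your-username"                  # placeholder username
LDP_PASSWORD = "your-password"              # placeholder password

document = {
    "X-OVH-TOKEN": "your-stream-token",     # routes the document to your stream
    "test_field": "OVHcloud",               # any custom field is accepted
}

response = requests.post(
    "https://" + LDP_CLUSTER + ":9200/ldp-logs/_doc",  # port 9200, ldp-logs index
    json=document,
    auth=(LDP_USER, LDP_PASSWORD),
    timeout=10,
)
response.raise_for_status()
print(response.json())
```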
## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. 
This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. 
- **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.it-it.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. 
Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. 
This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. 
It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pl-pl.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pl-pl.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). +OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. 
If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). 
Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. 
+The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pt-pt.md index 588bb12a287..e6c34b4fde6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_opensearch_api_mutualized_input/guide.pt-pt.md @@ -6,11 +6,11 @@ updated: 2024-06-29 ## Overview -OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/){.external}, meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). 
+OpenSearch is the star component of our platform, making it possible to use [OpenSearch indexes](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) to store your documents. The OpenSearch indexes are quite flexible, but they are not part of the log pipeline. If you want to also use the [Websocket live-tail](/pages/manage_and_operate/observability/logs_data_platform/cli_ldp_tail), or the [Alerting system](/pages/manage_and_operate/observability/logs_data_platform/alerting_stream) or the [Cold Storage](/pages/manage_and_operate/observability/logs_data_platform/archive_cold_storage) feature, and have automatic retention management, then you will need to use the log pipeline. Thanks to our OpenSearch log endpoint, it shall enable you to send logs using the HTTP OpenSearch API. Moreover, the endpoint supports also [OpenSearch Ingest](https://opensearch.org/docs/latest/opensearch/rest-api/ingest-apis/index/), meaning you can use advanced processing on your logs before they are sent in the pipeline. There is no additional cost for this feature, all you need is a [stream](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start). ## OpenSearch endpoint -The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf){.external}. Here is one example of the minimal message you can send: +The OpenSearch endpoint is a dedicated index where you can send a JSON document. The port used is the **9200**, the same HTTP port used for all other OpenSearch APIs of Logs Data Platform. The only fields needed are the **X-OVH-TOKEN** and an extra field (any custom field). Don't hesitate to go to the [Quick Start documentation](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) if you are not familiar with this notion. This document log will be transformed into a valid GELF log and any missing field will be filled automatically. In order to respect the GELF convention, you can also use all the [GELF format reserved fields](https://docs.graylog.org/docs/gelf). Here is one example of the minimal message you can send: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud"}' @@ -21,7 +21,7 @@ Replace the ``, `` and `` with your Logs Data Platf ![simple\_log](images/one_field.png){.thumbnail} The system automatically set the timestamp at the date when the log was received and added the field **test_field** to the log message. Source was set to **unknown** and the message to `-`. -Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf){.external}. 
Here is another example: +Note that the payload follows the JSON specification (and not the GELF one). The system will still recognize any reserved field used by the [GELF specification](https://docs.graylog.org/docs/gelf). Here is another example: ```shell-session $ curl -H 'Content-Type: application/json' -u ':' -XPOST https://.logs.ovh.com:9200/ldp-logs/_doc -d '{ "X-OVH-TOKEN" : "7f00cc33-1a7a-4464-830f-91be90dcc880" , "test_field" : "OVHcloud" , "short_message" : "Hello OS input", "host" : "OVHcloud_doc" }' @@ -47,9 +47,9 @@ The OpenSearch input will also flatten any sub-object or array sent through it a ## Use case: Vector -[Vector](https://vector.dev/){.external} is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. +[Vector](https://vector.dev/) is a fast and lightweight log forwarder written in Rust. This software is quite similar to [Logstash](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) or [Fluent Bit](/pages/manage_and_operate/observability/logs_data_platform/ingestion_kubernetes_fluent_bit). It takes logs from a source, transforms them and sends them in a format compatible with the configured output module. -The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/){.external} to explore all the possibilities. +The vector integrations are numerous with more than 20 sources supported, more than 25 transforms and 30 sinks supported. It supports OpenSearch as a sink thanks to its Elasticsearch compatibility. We will use the simplest configuration to make it work from a **journald** source to our OpenSearch endpoint. Don't hesitate to check the [documentation](https://vector.dev/docs/about/what-is-vector/) to explore all the possibilities. ```toml data_dir = "/var/lib/vector" # optional, must be allowed in read-write @@ -81,11 +81,11 @@ auth.password = "" Here is the explanation of this configuration. -The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/){.external} source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. +The source part of the TOML configuration file configure the [journald](https://vector.dev/docs/reference/configuration/sources/journald/) source. By default this source will use the `/var/lib/vector` directory to store its data. You can configure this directory with the global option data_dir. -The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/){.external} transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. 
This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. +The transform configuration part relates to the [remap](https://vector.dev/docs/reference/configuration/transforms/remap/) transform. This transform named here token has for unique goal to add the token stream value. It takes logs from the **inputs** named journald and adds a **X-OVH-TOKEN** value. This token value can be found on the `...`{.action} stream menu on the stream page in the Logs Data Platform manager. Replace **** with the token value of your stream. -The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/){.external}. It takes data from the previous **inputs** token and sets up several config points: +The final part is the [Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/). It takes data from the previous **inputs** token and sets up several config points: - gzip is supported on our endpoint, so it's activated with the **compression** configuration. - **healthcheck** is also supported and allows you to be sure that the platform is alive and well @@ -104,5 +104,5 @@ The logs from journald arrived fully parsed and ready to be explored. Use differ - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.de-de.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.de-de.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. 
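To make the role of these handlers and formatters concrete, here is a minimal sketch of plugging the GELF TCP/TLS handler into the standard `logging` module. It assumes the class and parameter names published in the Djehouty repository (`GELFTCPSSLHandler`, `hosts`, `static_fields`, `null_character`); the cluster hostname, port and stream token are placeholders to replace with the values of your own stream:

```python
# Minimal sketch: send a GELF log entry to Logs Data Platform over TCP/TLS with Djehouty.
# Assumptions: class/parameter names as published in the Djehouty repository;
# the host, port and token below are placeholders.
import logging
from djehouty.libgelf.handlers import GELFTCPSSLHandler

gelf_logger = logging.getLogger('gelf')
gelf_logger.setLevel(logging.INFO)
gelf_logger.addHandler(GELFTCPSSLHandler(
    hosts=["your-cluster.logs.ovh.com"],                  # placeholder cluster hostname
    port=12202,                                           # placeholder: use your stream's GELF TLS port
    static_fields={"X-OVH-TOKEN": "your-stream-token"},   # placeholder stream token
    use_tls=True,
    level=logging.INFO,
    null_character=True,
))

gelf_logger.info('test')
```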
@@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend installing [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty; make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handlers, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra fields.
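In a minimal sketch, the adapter wraps an existing logger and attaches the same extra fields to every record it emits; the `ltsv_logger` name matches the logger configured earlier in this guide, and the field names below are purely illustrative:

```python
# Minimal sketch: attach static extra fields to every record with the standard LoggerAdapter.
# Assumes ltsv_logger was configured earlier in this guide; the field names are illustrative.
import logging

adapter = logging.LoggerAdapter(ltsv_logger, {"app": "billing", "env": "production"})
adapter.info("user created")   # this record carries the extra fields automatically
```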
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-asia.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-asia.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-au.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-au.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ca.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ca.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-gb.md index 26e11bf88e6..2841ccc283c 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-gb.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,6 +143,6 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ie.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-ie.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-sg.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-sg.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
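As background for the GELF bullet points that follow, a GELF 1.1 record, per the Graylog specification linked in these guides, is a small JSON document. Sketched as a Python dict with illustrative values (the underscore-prefixed token field is an assumption about how custom fields are conventionally added, not a value quoted from the guide):

```python
# Shape of a GELF 1.1 payload as produced by a GELF formatter; custom fields,
# such as a stream token, are sent as underscore-prefixed additional fields.
gelf_record = {
    "version": "1.1",
    "host": "web01.example.org",
    "short_message": "test",
    "timestamp": 1673864400.0,   # seconds since the epoch
    "level": 6,                  # syslog severity, 6 = informational
    "_X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
}
```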
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-us.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.en-us.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-es.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-es.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
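Likewise, for the LTSV bullet points that follow: LTSV (Labeled Tab-Separated Values, the format linked in these guides) is simply one record per line made of `label:value` pairs separated by tabs. An illustrative record, written out as a Python string:

```python
# One LTSV record: label:value pairs joined by tabs, one record per line.
ltsv_record = "\t".join([
    "time:2023-01-16T10:00:00Z",
    "host:web01.example.org",
    "message:test",
    "X-OVH-TOKEN:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
])
# -> "time:2023-01-16T10:00:00Z\thost:web01.example.org\tmessage:test\t..."
```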
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-us.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.es-us.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-ca.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-ca.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
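Mirroring the GELF sketch given earlier, the LTSV handler described in the guide below is attached the same way. As before, the module path, handler name and keyword arguments are assumptions taken from the Djehouty README rather than from these hunks, and the endpoint and token are placeholders; per the parameter table in the guide, `null_character` is not supported for LTSV, so it is simply omitted here.

```python
import logging

# Assumed import path from the Djehouty project (github.com/ovh/djehouty).
from djehouty.libltsv.handlers import LTSVTCPSocketHandler

ltsv_logger = logging.getLogger("ltsv")
ltsv_logger.setLevel(logging.INFO)

# Placeholder entry point and stream token, as in the GELF sketch above.
ltsv_logger.addHandler(LTSVTCPSocketHandler(
    host="gra1.logs.ovh.com",
    port=12201,
    static_fields={"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"},
    use_tls=True,
    level=logging.INFO,
))

ltsv_logger.info("test")
```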
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-fr.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.fr-fr.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.it-it.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.it-it.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pl-pl.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pl-pl.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pt-pt.md index abd2772166d..4dbb8921d16 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_2_djehouty/guide.pt-pt.md @@ -7,14 +7,14 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 2.x. -[Djehouty](https://github.com/ovh/djehouty){.external} is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. +[Djehouty](https://github.com/ovh/djehouty) is intended to be a set of logging formatters and handlers to easily send log entries into Logs Data Platform. 
This package includes: -- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification){.external}: +- for [GELF](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into GELF(1.1). -- for [LTSV](http://ltsv.org/){.external}: +- for [LTSV](http://ltsv.org/): - a TCP/TLS handler to send log entries over TCP with TLS support. - a formatter to convert logging record into LTSV. @@ -22,8 +22,8 @@ This package includes: To complete this guide you will need: -- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 2, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -32,7 +32,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install Djehouty, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install Djehouty, make sure you have the latest version: ```shell-session $ pip install --upgrade pip @@ -45,7 +45,7 @@ Successfully installed djehouty- setuptools-18.3.1 #### Using sources -Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty){.external} and can be installed manually: +Djehouty is available on the [OVH github repository](https://github.com/ovh/djehouty) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/djehouty.git @@ -77,7 +77,7 @@ The following examples assume that you already have a working stream. Moreover, |static_fields *|`{"X-OVH-TOKEN": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"}`|| |null_character|True|Not Supported| -The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty){.external}. +The complete list of parameters supported by Djehouty can be found on [github](https://github.com/ovh/djehouty). #### Example: Use case with GELF over TCP/TLS @@ -120,7 +120,7 @@ ltsv_logger.info('test') ### Send additional meta data -If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects){.external} class to add extra. +If you have many handler, you can use the [logging.LoggerAdapter](https://docs.python.org/2/library/logging.html#loggeradapter-objects) class to add extra. 
The following example uses the LTSV logger defined above: @@ -143,5 +143,5 @@ ltsv_logger.info("Bonjour '%s'", 'John', extra={"lang": 'fr'}) - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.de-de.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.de-de.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-asia.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-asia.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. 
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-au.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-au.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). 
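To illustrate how the handler and formatter listed above fit together, here is a minimal sketch. The class names `LDPGELFTCPSocketHandler` and `LDPGELFFormatter` and their arguments are taken from the logging-ldp README and are assumptions to verify against the repository; `<ldp-cluster>` and `<stream-write-token>` are placeholders.

```python
# Minimal sketch, assuming the handler/formatter names published in the logging-ldp README.
import logging

from logging_ldp.formatters import LDPGELFFormatter
from logging_ldp.handlers import LDPGELFTCPSocketHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = LDPGELFTCPSocketHandler(hostname="<ldp-cluster>.logs.ovh.com")  # assumed placeholder: cluster entry point
handler.setFormatter(LDPGELFFormatter(token="<stream-write-token>"))      # the stream write token routes the record

logger.addHandler(handler)
logger.info("Hello from logging-ldp")
```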
- a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ca.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ca.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. 
+- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-gb.md index a3fbf3cc4b5..c342fcbb1c7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-gb.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. 
-- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ie.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-ie.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. 
+[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-sg.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-sg.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 
3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-us.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-us.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.en-us.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-es.md 
b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-es.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-es.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). 
+- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-us.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.es-us.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. 
-- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-ca.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-ca.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). 
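Since both packages ultimately emit GELF 1.1, it can help to see what such a payload looks like. The sketch below builds one by hand with the standard library only; the field names follow the public GELF 1.1 payload specification linked above, and the token field is an assumption mirroring the `X-OVH-TOKEN` routing used by Logs Data Platform.

```python
# A hand-built GELF 1.1 record, for illustration only; the formatters shipped with
# djehouty / logging-ldp produce this kind of JSON for you.
import json
import time

record = {
    "version": "1.1",                        # mandatory GELF version field
    "host": "my-app-01",                     # source of the message
    "short_message": "user login succeeded",
    "timestamp": time.time(),                # seconds since the epoch, fractional part allowed
    "level": 6,                              # syslog severity (6 = informational)
    "_user": "john",                         # additional fields are prefixed with an underscore
    "_X-OVH-TOKEN": "<stream-write-token>",  # assumption: the write token sent as an additional field
}

print(json.dumps(record))
```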
- a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-fr.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.fr-fr.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. 
+- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.it-it.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.it-it.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. 
-- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pl-pl.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pl-pl.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. 
+[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pt-pt.md index 4a15676af89..c8f92522725 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_python_3_logging_ldp/guide.pt-pt.md @@ -7,20 +7,20 @@ updated: 2023-01-16 This guide will show you how to push your logs to Logs Data Platform using Python 
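Independently of which handler you pick, the standard library's `logging.LoggerAdapter` can attach the same metadata to every record, in the same spirit as the `extra` argument used in these guides. The sketch below is plain stdlib usage; whether a given formatter forwards these attributes as GELF additional fields depends on the library, so treat the field names as illustrative.

```python
# Plain standard-library pattern: wrap a logger so every record carries the same extra fields.
import logging

logger = logging.getLogger("ldp")           # the logger your LDP handler is attached to
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())  # stand-in handler so this sketch runs on its own

app_logger = logging.LoggerAdapter(logger, extra={"app": "billing", "env": "prod"})
app_logger.info("invoice generated")        # the emitted record carries "app" and "env" as attributes
```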
3.x. -[logging-ldp](https://github.com/ovh/python-logging-ldp){.external} is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. +[logging-ldp](https://github.com/ovh/python-logging-ldp) is intended to be a high performance logging formatter and handler to send log entries into Logs Data Platform. This package includes: - a TCP/TLS handler to send log entries over TCP with TLS support. -- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}. +- a formatter to convert logging record into [GELF(1.1)](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification). - a facility to ensure fields suits the [LDP naming conventions](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). ## Requirements To complete this guide you will need: -- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/){.external}. -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- Python 3, we recommend to install [pip](https://pip.pypa.io/en/stable/installing/). +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -29,7 +29,7 @@ To complete this guide you will need: #### Using pip -You can use [pip](https://pip.pypa.io/en/stable/){.external} to install logging-ldp, make sure you have the latest version: +You can use [pip](https://pip.pypa.io/en/stable/) to install logging-ldp, make sure you have the latest version: ```shell-session $ pip3 install --upgrade pip @@ -42,7 +42,7 @@ Successfully installed logging-ldp- setuptools-18.3.1 #### Using sources -logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp){.external} and can be installed manually: +logging-ldp is available on the [OVH github repository](https://github.com/ovh/python-logging-ldp) and can be installed manually: ```shell-session $ git clone git@github.com:ovh/python-logging-ldp.git @@ -200,5 +200,5 @@ As we can see: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.de-de.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.de-de.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.de-de.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-asia.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-asia.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. 
+You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-au.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-au.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. 
-- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ca.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ca.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. 
OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). 
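Both loggers introduced in these pages sit behind the standard Rust `log` facade, so application code does not depend on either backend directly. A minimal sketch of that facade usage, with the backend initialisation deliberately left out (it is covered by the guide's own gelf_logger and log4rs-gelf snippets):

```rust
use log::{info, warn};

fn main() {
    // Initialise gelf_logger or log4rs-gelf here, as shown in the guide.
    // The calls below only use the backend-agnostic `log` macros, so the
    // same code works unchanged with either method.
    info!("application started");
    warn!("stream token not configured yet");
}
```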
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-gb.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-gb.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. 
+You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ie.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-ie.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. 
-- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-sg.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-sg.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. 
OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). 
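For readers who have not met the GELF format referenced throughout these hunks: a GELF 1.1 record is a flat JSON document in which only `version`, `host` and `short_message` are mandatory, and any additional field is prefixed with an underscore. The values below are invented purely for illustration and do not come from this guide.

```json
{
  "version": "1.1",
  "host": "my-application-host",
  "short_message": "application started",
  "timestamp": 1723017600.0,
  "level": 6,
  "_service": "example-api",
  "_user_id": 42
}
```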
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-us.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.en-us.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. 
+You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-es.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-es.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. 
-- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-us.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.es-us.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. 
OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). 
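One more aside on the serde requirement listed above: the derive feature is what lets you annotate your own types with `#[derive(Serialize)]`, which is presumably how structured values end up as extra GELF fields. The type and the use of serde_json below are made-up examples for illustration, not something defined by gelf_logger or log4rs-gelf.

```rust
use serde::Serialize;

// Hypothetical application type, not part of the loggers described above.
#[derive(Serialize)]
struct RequestInfo {
    method: String,
    path: String,
    duration_ms: u64,
}

fn main() {
    let req = RequestInfo {
        method: "GET".into(),
        path: "/health".into(),
        duration_ms: 12,
    };
    // serde_json is used here only to show the serialised form; the GELF
    // backends perform their own serialisation internally.
    println!("{}", serde_json::to_string(&req).unwrap());
}
```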
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-ca.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-ca.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. 
+You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-fr.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.fr-fr.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. 
-- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.it-it.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.it-it.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. 
OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pl-pl.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pl-pl.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. -- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. 
+You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pt-pt.md index 9cec666c99e..fec39c0a21b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_rust_loggers/guide.pt-pt.md @@ -7,10 +7,10 @@ updated: 2024-08-07 This guide will explain how to push your logs to Logs Data Platform using Rust with two differents libraries. Use the one you prefer. -Rust has a logging implementation ([log](https://docs.rs/log/*/log/){.external}) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification){.external}: +Rust has a logging implementation ([log](https://docs.rs/log/*/log/)) which is widely used. OVHcloud has implemented this system to support the [GELF format](https://go2docs.graylog.org/4-x/getting_in_log_data/gelf.html?tocpath=Getting%20in%20Log%20Data%7CLog%20Sources%7CGELF%7C_____0#GELFPayloadSpecification#gelf-payload-specification): - **gelf_logger**: This is a minimal logger. -- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/){.external}. +- **log4rs-gelf**: Based on _gelf_logger_, this implementation is compatible with the complex configurable framework [log4rs](https://docs.rs/log4rs/*/log4rs/). Those loggers will: @@ -24,10 +24,10 @@ Those loggers will: To complete this guide you will need: - Rust. We recommend the last stable version. 
-- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [An activated Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) -- To install the [**serde**](https://serde.rs/){.external} crate with the **derive** feature. -- To install the [**log**](https://crates.io/crates/log){.external} crate with the **serde** feature. +- To install the [**serde**](https://serde.rs/) crate with the **derive** feature. +- To install the [**log**](https://crates.io/crates/log) crate with the **serde** feature. ## Instructions @@ -112,11 +112,11 @@ Don't forget to modify the placeholder **** to the clu Don't forget to modify the placeholder **** to the actual value of the write token of your stream. -You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*){.external}. +You could also look at the [generated API documentaton](https://docs.rs/gelf_logger/*). ### Second method: log4rs-gelf -This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/){.external} +This method is an alternative to the previous one. Please consider the following as a different rust project. You need to be familiar with the [log4rs framework](https://docs.rs/log4rs/latest/log4rs/) Install **log4rs** and **log4rs-gelf** in your Rust project. @@ -203,11 +203,11 @@ fn main() { ``` -You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*){.external}. +You could also look at the [generated API documentation](https://docs.rs/log4rs-gelf/*). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.de-de.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.de-de.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-asia.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-asia.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-au.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-au.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ca.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ca.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-gb.md index e1759275a79..33e1a61bda1 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-gb.md @@ -16,7 +16,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -40,7 +40,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -159,5 +159,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ie.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-ie.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-sg.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-sg.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-us.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.en-us.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-es.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-es.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-us.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.es-us.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-ca.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-ca.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-fr.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.fr-fr.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.it-it.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.it-it.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pl-pl.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pl-pl.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pt-pt.md index c8535d7fa9c..ea5fa70adf7 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_syslog_ng/guide.pt-pt.md @@ -15,7 +15,7 @@ In this tutorial will show you how to send Logs from your Linux instance to Logs - A **Linux** based instance (server, VPS, Cloud instance, Raspberry Pi, ...). 
Command lines will be for **DEBIAN 12** in this tutorial - A root access to this instance -- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29){.external} +- [Activated your Logs Data Platform account](https://www.ovh.com/fr/order/express/#/new/express/resume?products=~%28~%28planCode~%27logs-account~productId~%27logs%29) - [To create at least one Stream and get its token](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## Instructions @@ -39,7 +39,7 @@ Conclusion: lot of info, with a date, a process, a description. but hard to foll ### Configure your Account -First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)){.external}, a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! +First thing to do is to configure your Logs Data Platform account: [create your user](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs)), a stream and a dashboard. Verify that everything works already perfectly. We wrote an independent guide for this, please read it and come back here after : [Quick start.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) Read it? Let's go to the next step then ! ### Install and configure a log collector @@ -158,5 +158,5 @@ The best feature is the ability to mix criteria, based on what is important to y - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.de-de.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.de-de.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. 
## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-asia.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-asia.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). 
NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-au.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-au.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. 
So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ca.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ca.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. 
Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-gb.md index 3295516956c..864e5d8a051 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-gb.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. 
All you need is 15 minutes and one software: [NXLog](http://nxlog.co){.external}. NXLog is one of the leaders of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software: [NXLog](http://nxlog.co). NXLog is one of the leaders of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceeding. Once you have it, install it on your system. By default, the program will install itself in **C:\\Program Files\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceeding. Once you have it, install it on your system. By default, the program will install itself in **C:\\Program Files\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). 
-If you want to go further, don't hesitate to try the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to try the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ie.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-ie.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. 
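To give a concrete idea of what that file can contain, here is a minimal sketch of an **nxlog.conf** that forwards the Windows Event Log to Logs Data Platform as GELF over TLS. It is only an illustration: the cluster address, the stream token and the port (12202 is assumed here for GELF over TLS) are placeholders to replace with your own values, it only shows the routing-specific parts (keep the path and module definitions already present in the default file), and the configuration detailed in the next section remains the reference.

```
# Illustrative sketch only. Placeholders: <your_cluster>, <your_stream_token>.

# GELF output format support
<Extension gelf>
    Module      xm_gelf
</Extension>

# Read the Windows Event Log
<Input eventlog>
    Module      im_msvistalog
</Input>

# Send events to Logs Data Platform over TLS, tagged with the stream token
<Output ldp>
    Module      om_ssl
    Host        <your_cluster>.logs.ovh.com
    Port        12202
    OutputType  GELF_TCP
    # The X-OVH-TOKEN field routes the messages to your stream
    Exec        $X-OVH-TOKEN = '<your_stream_token>';
    # Depending on your NXLog version, you may also need to point CAFile
    # to a CA bundle for certificate verification.
</Output>

<Route eventlog_to_ldp>
    Path        eventlog => ldp
</Route>
```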
## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-sg.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-sg.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. 
By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-us.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.en-us.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). 
Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-es.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-es.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. 
Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-us.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.es-us.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. 
Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-ca.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-ca.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. 
Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-fr.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.fr-fr.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. 
## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.it-it.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.it-it.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). 
NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pl-pl.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pl-pl.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. 
So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). -If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pt-pt.md index 7de38366625..5183c8ca43b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/ingestion_windows_nxlog/guide.pt-pt.md @@ -5,18 +5,18 @@ updated: 2023-01-16 ## Objective -At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co){.external}. NXLog is one of the leader of the log management tools. 
Its configuration is fairly simple and can get you started in a few minutes. +At OVHcloud, we love Microsoft products too. So it is important for us to provide you a way to send your Windows Logs to Logs Data Platform. All you need is 15 minutes and one software : [NXLog](http://nxlog.co). NXLog is one of the leader of the log management tools. Its configuration is fairly simple and can get you started in a few minutes. ## Requirements For this tutorial you will need to have completed the following steps : -- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- [Activated your Logs Data Platform account.](https://www.ovh.com/fr/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) - [To create at least one Stream and get its token.](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) ## NXLog -You can find NXLog, at its official website [nxlog.co](http://nxlog.co){.external}. Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. +You can find NXLog, at its official website [nxlog.co](http://nxlog.co). Please go to the official website and download the latest version for Windows (2.10.2150 at the time of writing). Be sure to have Administrator rights before proceding. Once you have it, install it on your system. By default the program will install itself in **C:\\Program Files(x86)\\nxlog\\**. Navigate to this folder to edit the configuration file **nxlog.conf** present in the folder **conf**. ## Configuration @@ -108,11 +108,11 @@ Jump to Graylog (use the Graylog access button in the Manager) and to the stream I think that's pretty much it. I know, it didn't even take 10 minutes :-). 
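If the messages do not appear in the stream, a useful first check is to restart the NXLog service after every change to **nxlog.conf** and to read its internal log file. A minimal sketch, assuming the default service name `nxlog` and the default installation folder, to run from an elevated command prompt:

```shell-session
C:\> net stop nxlog
C:\> net start nxlog
C:\> type "C:\Program Files (x86)\nxlog\data\nxlog.log"
```

Configuration errors (bad token, unreachable host, TLS issues) are usually reported in this file.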
-If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html){.external} +If you want to go further, don't hesitate to fly to the [NXlog documentation](https://docs.nxlog.co/userguide/documentation.html) ## Getting Help - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.de-de.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.de-de.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-asia.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-asia.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-au.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-au.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-au.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ca.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ca.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-gb.md index 2e69fba934f..846d4325539 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-gb.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ie.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ie.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-ie.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-sg.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-sg.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-us.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.en-us.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-es.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-es.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-es.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-us.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.es-us.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-ca.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-ca.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-fr.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-fr.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.fr-fr.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.it-it.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.it-it.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pl-pl.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pl-pl.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pt-pt.md index 7796031ea76..316dc6af2c6 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pt-pt.md +++ 
b/pages/manage_and_operate/observability/logs_data_platform/integration_opensearch_api/guide.pt-pt.md @@ -81,5 +81,5 @@ Like most features of Logs Data Platform, aliases can be shared with other Logs - [Introduction to Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_introduction_to_LDP) - [Getting Started with Logs Data Platform](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - [Our documentation](/products/public-cloud-data-platforms-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/introduction_to_services_logs/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/introduction_to_services_logs/guide.en-gb.md index d07417d82b4..b3c6a7ecbe9 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/introduction_to_services_logs/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/introduction_to_services_logs/guide.en-gb.md @@ -209,6 +209,6 @@ This means that: - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))){.external} +- Create an account: [Try it!](https://www.ovh.com/en/order/express/#/express/review?products=~(~(planCode~'logs-account~productId~'logs))) Join our [community of users](/links/community). diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.de-de.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.de-de.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. 
OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. ```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). 
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-asia.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-asia.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-au.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-au.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ca.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ca.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-gb.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-gb.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ie.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-ie.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-sg.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-sg.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-us.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.en-us.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
```shell-session $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}' @@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents #### Index some data -Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. +Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`. ```shell-session $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }' @@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true} ``` -To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search: +To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with an URI search: ```shell-session $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles' @@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt - The **PUT** HTTP command can be used to create or modify a document. - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format. - The address contains the endpoint of the cluster followed by the **name of your index** -- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1). +- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1). 
You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character. @@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and - You are not allowed to change the settings of your index. - You can create an **alias** on Logs Data Platform and attach it to one or several indices. - Unlike indices, aliases are **read-only**, you cannot write through an alias yet. -- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-es.md index ab7e3e174c8..59deae3fe93 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-es.md @@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away. -When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. +When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster. 
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-us.md
index ab7e3e174c8..59deae3fe93 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.es-us.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-ca.md
index d21c16b3a0b..3bc409d6bc9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-ca.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-fr.md
index d21c16b3a0b..3bc409d6bc9 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.fr-fr.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.it-it.md
index ab7e3e174c8..59deae3fe93 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.it-it.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pl-pl.md
index ab7e3e174c8..59deae3fe93 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pl-pl.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pt-pt.md
index ab7e3e174c8..59deae3fe93 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/opensearch_index/guide.pt-pt.md
@@ -36,7 +36,7 @@ You must just choose a suffix for your index. The final name will follow this co
 For each index, you can specify the number of **shards**. A **shard** is the main component of an **index**. Its maximum storage capacity is set to **25 GB** (per shard). Multiple shards means more volume, more parallelism in your requests and thus more performance. Optionally, you can also be notified when your index is close to its critical size. Once your index is created, you can use it right away.
-When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/){.external}, you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
+When you create an index through the [OpenSearch API](https://opensearch.org/docs/latest/opensearch/index-data/), you can also specify the number of shards. Note that the maximum number of shards by index is limited to **16**. OpenSearch compatible tools can now create indices on the cluster as long as they follow the naming convention `logs--i-`. Here is an example with a curl command with the user **logs-ab-12345** and the index **logs-ab-12345-i-another-index** on gra2 cluster.
 ```shell-session
 $ curl -u logs-ab-12345:mypassword -XPUT -H 'Content-Type: application/json' 'https://gra2.logs.ovh.com:9200/logs-ab-12345-i-another-index' -d '{ "settings" : {"number_of_shards" : 1}}'
@@ -48,7 +48,7 @@ Whatever method you use, you will be able to query and visualize your documents
 #### Index some data
-Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/){.external}. Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
+Logs Data Platform OpenSearch indices are compatible with the [OpenSearch REST API](https://opensearch.org/docs/latest/opensearch/rest-api/index/). Therefore, you can use simple http requests to index and search your data. The API is accessible behind a secured https endpoint with mandatory authentication. We recommend that you use [tokens](/pages/manage_and_operate/observability/logs_data_platform/security_tokens) to authenticate yourself. You can retrieve the endpoint of the API at the **Home** page of your service. Here is a simple example to index a document with curl with an index on the cluster `.logs.ovh.com`.
 ```shell-session
 $ curl -u token: -XPUT -H 'Content-Type: application/json' 'https://.logs.ovh.com:9200/logs--i-/_doc/1' -d '{ "user" : "Oles", "company" : "OVH", "message" : "Hello World !", "post_date" : "1999-11-02T23:01:00" }'
@@ -91,7 +91,7 @@ $ curl -XGET -u token: 'https://.logs.ovh.com:920
 {"_id":"1","_index":"logs--i-","_primary_term":1,"_seq_no":0,"_source":{"company":"OVH","message":"Hello World !","post_date":"1999-11-02T23:01:00","user":"Oles"},"_type":"_doc","_version":1,"found":true}
 ```
-To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} or a URI search. Here is a simple example with an URI search:
+To issue a simple search you can either use the [Query DSL](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) or a URI search. Here is a simple example with a URI search:
 ```shell-session
 $ curl -XGET -u token: 'https://.logs.ovh.com:9200/logs--i-/_search?q=user:Oles'
@@ -429,7 +429,7 @@ $ curl -u : -XPUT -H 'Content-Type: application/json' 'htt
 - The **PUT** HTTP command can be used to create or modify a document.
 - The **-H 'Content-Type: application/json'** option is the mandatory header to indicate that the data will be in the json format.
 - The address contains the endpoint of the cluster followed by the **name of your index**
-- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/){.external}: the number of shards (the number of replicas will be automatically set at 1).
+- The payload of the request is a **JSON document** which contains the [settings of your index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/): the number of shards (the number of replicas will be automatically set at 1).
 You have to follow the Logs Data Platform naming convention `-i-` to create your index. Your username is the one you use to connect to Graylog or to use the API. The suffix can contain any alphanumeric character.
@@ -512,11 +512,11 @@ Index as a service has some specificities on our platforms. This additional and
 - You are not allowed to change the settings of your index.
 - You can create an **alias** on Logs Data Platform and attach it to one or several indices.
 - Unlike indices, aliases are **read-only**, you cannot write through an alias yet.
-- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}.
+- If there is a feature missing, feel free to contact us on the [community hub](https://community.ovh.com/en/c/Platform/data-platforms).
 ## Go further
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.de-de.md
index 6ba7c8853f4..986e2a548b8 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.de-de.md
@@ -8,11 +8,11 @@ updated: 2023-06-02
 With Logs Data Platform, there are 3 ways to query your logs.
-- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external}
-- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external}
-- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
+- The [Graylog Web Interface](https://gra1.logs.ovh.com)
+- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative)
+- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
-So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}.
+So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard).
 All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how.
@@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface.
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-asia.md
index 6ba7c8853f4..986e2a548b8 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-asia.md
@@ -8,11 +8,11 @@ updated: 2023-06-02
 With Logs Data Platform, there are 3 ways to query your logs.
-- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external}
-- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external}
-- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
+- The [Graylog Web Interface](https://gra1.logs.ovh.com)
+- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative)
+- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
-So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}.
+So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard).
 All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how.
@@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface.
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-au.md
index 6ba7c8853f4..986e2a548b8 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-au.md
@@ -8,11 +8,11 @@ updated: 2023-06-02
 With Logs Data Platform, there are 3 ways to query your logs.
-- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external}
-- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external}
-- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
+- The [Graylog Web Interface](https://gra1.logs.ovh.com)
+- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative)
+- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
-So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}.
+So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard).
 All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how.
@@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface.
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ca.md
index 6ba7c8853f4..986e2a548b8 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ca.md
@@ -8,11 +8,11 @@ updated: 2023-06-02
 With Logs Data Platform, there are 3 ways to query your logs.
-- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external}
-- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external}
-- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
+- The [Graylog Web Interface](https://gra1.logs.ovh.com)
+- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative)
+- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
-So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}.
+So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard).
 All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how.
@@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface.
 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-gb.md
index a6b5d1dd526..e3035ddb68c 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-gb.md
@@ -8,11 +8,11 @@ updated: 2023-06-02
 With Logs Data Platform, there are 3 ways to query your logs.
-- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external}
-- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external}
-- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
+- The [Graylog Web Interface](https://gra1.logs.ovh.com)
+- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative)
+- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards).
-So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}.
+So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard).
 All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how.
@@ -156,6 +156,6 @@ The only place you cannot use your token is the Graylog Web Interface.
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ie.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-ie.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-sg.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-sg.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-us.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.en-us.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-es.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-es.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-us.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.es-us.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-ca.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-ca.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-fr.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.fr-fr.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.it-it.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.it-it.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pl-pl.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pl-pl.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pt-pt.md index 6ba7c8853f4..986e2a548b8 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/security_tokens/guide.pt-pt.md @@ -8,11 +8,11 @@ updated: 2023-06-02 With Logs Data Platform, there are 3 ways to query your logs. -- The [Graylog Web Interface](https://gra1.logs.ovh.com){.external} -- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative){.external} -- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/){.external} located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). +- The [Graylog Web Interface](https://gra1.logs.ovh.com) +- The [Graylog API](https://gra1.logs.ovh.com/api/api-browser/global/index.html#!/search47universal47relative/searchRelative) +- The [OpenSearch API](https://opensearch.org/docs/latest/opensearch/query-dsl/index/) located at the port 9200 of your cluster (find its address in the **Home** Page) against your [alias](/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards). -So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard){.external}. +So you can pop up a [Grafana](/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana) or even [a terminal Dashboard for Graylog](https://github.com/Graylog2/cli-dashboard). All these accesses are secured by your username and password. But what if you don't want to put your Logs Data Platform credentials everywhere? You can just use tokens to access all these endpoints and revoke them anytime you want. This tutorial is here to tell you how. @@ -156,5 +156,5 @@ The only place you cannot use your token is the Graylog Web Interface. 
- Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.de-de.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.de-de.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. 
This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. 
It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided into 3+1 parts. The first 3 parts are grok filters that try to parse the different formats. If a filter fails (with a **_grokparsefailure** tag), the next log format is tried: HTTP, TCP and the error log format are the ones attempted. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filters was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending on your flavor of rsyslog and HAProxy, your configuration file may already be present at a particular location. If you do not have any HAProxy-related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist, simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) for more information. On Debian flavors, for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...).
To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. 
You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-asia.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-asia.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. 
Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. 
@@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). 
The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided into 3+1 parts. The first 3 parts are grok filters that try to parse the different formats. If a filter fails (with a **_grokparsefailure** tag), the next log format is tried: HTTP, TCP and the error log format are the ones attempted. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filters was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending on your flavor of rsyslog and HAProxy, your configuration file may already be present at a particular location. If you do not have any HAProxy-related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist, simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) for more information. On Debian flavors, for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable.
You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. 
You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-au.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-au.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). 
## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. 
You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
+As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. 
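Before editing the rsyslog configuration, it can be worth checking that the collector endpoint and the certificate you just saved actually match. A minimal sketch, assuming `logstash.pem` is in the current directory and `<your-collector-hostname>` / `<your-port>` stand for the values shown in the collector details:

```text
# Replace <your-collector-hostname> and <your-port> with the values copied above.
# A successful TLS handshake confirms that logstash.pem matches the collector endpoint.
openssl s_client -connect <your-collector-hostname>:<your-port> -CAfile logstash.pem
```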
```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. 
On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ca.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ca.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. 
+This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. 
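As a quick reminder, an LTSV record is a single line of `label:value` fields separated by tab characters, which keeps it cheap to produce and to parse. A purely illustrative record, reusing a few of the field names defined later in this guide (the values are made up):

```text
client_ip:203.0.113.7	http_status_code_int:200	bytes_read_int:532	backend_name:api
```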
On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. 
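On Debian or Ubuntu, that package provides the GnuTLS network stream driver that rsyslog needs to open the TLS connection; a minimal sketch of the installation:

```text
sudo apt-get update
sudo apt-get install rsyslog-gnutls
```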
@@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. 
+Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. 
In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-gb.md index 744b4729e46..d1a0df9c6ce 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-gb.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. 
This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and coverss everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection.
On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). 
The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. 
If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). 
Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ie.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-ie.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. 
+[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. 
@@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). 
The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use two methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example, to anonymize some logs under certain conditions). The second method should be preferred when you have a high-traffic website (at least 1,000 requests per second). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess, we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs on a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purposes. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided into 3+1 parts. The first 3 parts are grok filters that each try to parse a different format: if one fails (adding a **_grokparsefailure** tag), the next log format is tried, covering the HTTP, TCP and error log formats. The last part is a date filter, used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filters was successful.
```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. 
#### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. 
HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-sg.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-sg.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. 
This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. 
It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
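As an illustration of the Logstash processing mentioned above, a minimal, hypothetical filter snippet that scrubs cookie values for a given frontend before indexing could look like the sketch below; the `frontend_name`, `captured_request_cookie` and `captured_response_cookie` fields follow the naming convention used later in this guide, while the `"admin"` frontend name is an assumption:

```ruby
filter {
  # Hypothetical condition: only scrub traffic hitting the "admin" frontend
  if [frontend_name] == "admin" {
    mutate {
      # drop the captured cookie values so they are never indexed
      remove_field => [ "captured_request_cookie", "captured_response_cookie" ]
    }
  }
}
```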
Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). 
To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. 
You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-us.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.en-us.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. 
Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful piece of software with many configuration options available. Fortunately, the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This guide is not a HAProxy tutorial, so it will not cover how to install, configure and deploy HAProxy, but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend, you can choose between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dash characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client IP, the client port, the date the connection was opened, the frontend, backend and server names, the timers in milliseconds (time spent waiting for the client, in the queues, and on the server side), the status code, the number of bytes read, the cookie information, the termination state, the number of concurrent connections on the process, the frontend, the backend and the server respectively, the number of retries, the backend queue number and finally the request itself. You can visit chapter 8 of the [HAProxy documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) for a detailed description of all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**.
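For illustration, a minimal **/etc/haproxy/haproxy.cfg** excerpt enabling HTTP logging could look like the sketch below; the syslog socket path and the `local0` facility are assumptions and must match the rsyslog setup described later in this guide:

```text
global
    # send logs to the local syslog socket (socket path and facility are assumptions)
    log /dev/log local0 info

defaults
    log global
    mode http
    # emit the verbose HTTP log format discussed above
    option httplog
```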
@@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). 
The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. 
You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. 
You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-es.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-es.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). 
## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. 
You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
+As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. 
```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. 
On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-us.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.es-us.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. 
+This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. 
On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. 
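As an illustration of this prerequisite, here is how the TLS pieces are typically put in place on a Debian/Ubuntu host. This is only a sketch: the destination path chosen for the certificate is an assumption, and you should use the certificate and stream details of your own Logs Data Platform account.

```bash
# Install the rsyslog TLS (GnuTLS) driver mentioned above
sudo apt-get install rsyslog-gnutls

# Assumed location for the downloaded OVHcloud certificate (any path works,
# as long as your rsyslog configuration points to the same file)
sudo cp logstash.pem /etc/rsyslog.d/logstash.pem

# Restart rsyslog so the TLS driver and the new configuration are taken into account
sudo systemctl restart rsyslog
```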
@@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. 
+Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. 
In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-ca.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-ca.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. 
This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. 
On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). 
The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. 
If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). 
Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-fr.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.fr-fr.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. 
+[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. 
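To make this choice concrete, here is a minimal, illustrative **haproxy.cfg** sketch showing where the format is selected. The syslog destination (`127.0.0.1:514 local0`) is a placeholder, not a value taken from this guide:

```text
global
    # placeholder syslog destination and facility
    log 127.0.0.1:514 local0

defaults
    log global
    # choose one of the formats described above:
    option httplog
    # option tcplog
```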
@@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). 
The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. 
```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate into a file named **logstash.pem** and note the hostname and your port. Depending on your flavor of rsyslog and HAProxy, your configuration file may already be present at a particular location. If you do not have any HAProxy-related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist, simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) for more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages, you may have a file located at **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high-performance [LTSV format](http://ltsv.org) with HAProxy by defining a custom log format. This option is best suited for high-traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like the SSL ciphers and version used in the connection, the client port, a request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then adjust your rsyslog configuration to enclose the log line in a compatible LTSV log line. Moreover, you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform for even more security and performance.
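As a reminder, an LTSV record is a single line of `label:value` pairs separated by tab characters. A shortened, hypothetical HAProxy record using some of the field names defined below could look like this (`<TAB>` stands for a literal tab character):

```text
client_ip:5.196.2.38<TAB>client_port_int:39527<TAB>backend_name:api<TAB>http_status_code_int:304<TAB>bytes_read_int:320
```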
#### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. 
HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.it-it.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.it-it.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. 
This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. 
It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
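As a simplified, hypothetical illustration of that idea (not the collector's actual configuration), a grok filter can capture tokens from the raw line directly into fields that follow the naming convention, for example the `_int` suffix for numeric values:

```ruby
filter {
  grok {
    # Capture the client address and port from an HAProxy log line;
    # the *_int suffix marks the port so it is indexed as a number.
    match => { "message" => "%{IP:client_ip}:%{INT:client_port_int}" }
  }
}
```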
Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). 
To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. 
You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pl-pl.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pl-pl.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). ## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. 
Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. 
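As a minimal sketch of that option, assuming a local syslog daemon listening on the default UDP port (adapt the address and facility to your own setup), the relevant part of **haproxy.cfg** could look like:

```text
global
    log 127.0.0.1:514 local0

defaults
    log global
    mode http
    option httplog
```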
@@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. +As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). 
The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. ```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. 
You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. 
You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pt-pt.md index 445ef1d4002..6557d5d70f4 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_haproxy/guide.pt-pt.md @@ -6,7 +6,7 @@ updated: 2020-07-27 ## Objective -[HAProxy](http://www.haproxy.org/){.external} is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. +[HAProxy](http://www.haproxy.org/) is the de-facto standard load balancer for your TCP and HTTP based applications. This French software provides high availability, load balancing, and proxying with high performance, unprecedented reliability and a very fair price (it's completely free and open-source). It is used by the world's most visited web sites and is also heavily used internally at OVHcloud and in some of our products. HAProxy has a lot of features and because it is located between your infrastructure and your clients, it can give you a lot of information about either of them. Logs Data Platform helps you to exploit this data and can answer a lot of your questions: @@ -18,7 +18,7 @@ HAProxy has a lot of features and because it is located between your infrastruct - How long do your clients stay on your websites? - Are all of your back-end servers healthy? -This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/){.external} to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/){.external}. +This guide will show you two ways to forward your HAProxy logs to the Logs Data Platform. Both ways will use [rsyslog](http://www.rsyslog.com/) to send logs. The first configuration will leverage Logstash parsing capabilities, and the second will use the custom log format feature of HAProxy to send logs using the [LTSV Format](http://ltsv.org/). 
## Requirements @@ -32,7 +32,7 @@ For this tutorial, you should have read the following ones to fully understand w ### HAProxy: -HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt){.external} is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs){.external}. Depending on your backend you have the choice between several formats for your logs: +HAProxy is a powerful software with many configuration options available. Fortunately the [configuration documentation](http://www.haproxy.org/download/1.9/doc/configuration.txt) is very complete and covers everything you need to know for this tutorial. This tutorial is not a HAProxy tutorial so it will not cover how to install, configure and deploy HAProxy but you will find material on the matter [on the official website](http://www.haproxy.org/#docs). Depending on your backend you have the choice between several formats for your logs: - **Default format**: Despite giving some information about the client and the destination, this format is not really verbose and cannot really be used for any deep analysis. - **Tcp Log format**: This format gives you much more information for troubleshooting your tcp connections and is the one you should use when you have no idea what type of application is started behind your backend. @@ -45,7 +45,7 @@ Here is an example of a log line with the HTTP log format : haproxy[14389]: 5.196.2.38:39527 [03/Nov/2015:06:25:25.105] services~ api/api 4599/0/0/428/5027 304 320 - - ---- 1/1/0/1/0 0/0 "GET /v1/service HTTP/1.1" ``` -Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt){.external} to have a detailed description on all these formats and the available fields. +Every block of this line (including the dashes characters) gives one piece of information about the terminated connection. On this single line you have information about the process, its pid, the client ip, the client port, the date of the opening of the connection, the frontend, backend and server names, timers in milliseconds waiting for the client, process buffers, and server, the status code, the number of bytes read, the cookies information, the termination state, the number of concurrent connection respectively on the process, the frontend, the backend and the servers, the number of retries, the backend queue number and finally the request itself. 
You can visit the chapter 8 [on HAProxy Documentation](http://www.haproxy.org/download/2.3/doc/configuration.txt) to have a detailed description on all these formats and the available fields. To activate the logging on HAProxy you must set a global **log** option on the **/etc/haproxy/haproxy.cfg**. @@ -81,9 +81,9 @@ We can send logs to Logs Data Platform by using several softwares. One of them i ### Rsyslog: -[Rsyslog](http://www.rsyslog.com){.external} is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/){.external} for detailed information. +[Rsyslog](http://www.rsyslog.com) is a fast log processor fully compatible with the syslog protocol. It has evolved into a generic collector able to accept entries from a lot of different inputs, transform them and finally send them to various destinations. Installation and configuration documentation can be found at the official website. Head to [http://www.rsyslog.com/doc/v8-stable/](http://www.rsyslog.com/doc/v8-stable/) for detailed information. -To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org){.external}. The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). +To send HAProxy logs with RSyslog, we will use several methods: a [dedicated Logstash collector](/pages/manage_and_operate/observability/logs_data_platform/ingestion_logstash_dedicated_input) and the plain [LTSV format](http://ltsv.org). The first method is the least intrusive and can be used when you need Logstash processing of your logs (for example to anonymize some logs under some conditions). The second method should be preferred when you have a high traffic website (at least 1000 requests by second.). For both methods you will need our SSL certificate to enable TLS communication. Some Debian Linux distributions need you to install the package **rsyslog-gnutls** to enable SSL. @@ -95,7 +95,7 @@ Once you have activated the tcp or http logs of your HAProxy instance, you must #### Logstash collector configuration -As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html){.external} to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html){.external} and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. 
+As you may guess we have to configure the Logstash collector with some clever [Grok filters](https://www.elastic.co/guide/en/logstash/6.7/plugins-filters-grok.html) to make the collector be aware of our [field naming convention](/pages/manage_and_operate/observability/logs_data_platform/getting_started_field_naming_convention). The collector will accept logs in a generic [TCP input](https://www.elastic.co/guide/en/logstash/7.x/plugins-inputs-tcp.html) and use grok filters to extract the information. Thanks to the wizard feature, you won't even need to copy and paste the following configuration snippets, but they are still given for reference purpose. Here is the Logstash input configuration: @@ -156,7 +156,7 @@ This configuration should be familiar, we set the port, the ssl parameter and th } ``` -The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601){.external} format we use for date parsing. This filter is only executed when one of the previous filter was successful. +The filter is divided in 3+1 parts. The first 3 parts are grok filters that try to parse the different format. If failing (with a **_grokparsefailure** tag), it tries another log format. HTTP, TCP and the error log format are the one tried. The last part is a date filter. This filter is used to translate the dates to the correct [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format we use for date parsing. This filter is only executed when one of the previous filter was successful. ```ruby ### HA PROXY ### @@ -204,7 +204,7 @@ For the first action you will need the collector certificate and its hostname, y ![collector\_menu](images/collector_info.png){.thumbnail} -Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html){.external} to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. +Copy the certificate in a file **logstash.pem** and copy the hostname and your port. Depending of your flavor of rsylog and HAProxy, your configuration file may be already present at a particular location. If you do not have any HAProxy related file in the directory **/etc/rsyslog.d/**, create a new file in this directory. If the directory does not exist , simply edit the **/etc/rsyslog.conf** file. Don't hesitate to review [the rsyslog documentation](http://www.rsyslog.com/doc/master/configuration/index.html) to have more information. On Debian flavors for example, if you used the rsyslog and HAProxy packages you may have a file located in **/etc/rsyslog.d/46-haproxy.conf**. In that case, you should prefer editing this file. 
```text $AddUnixListenSocket /var/lib/haproxy/dev/log @@ -227,11 +227,11 @@ The important settings here are the **logstash.pem** path location, **activation ### Use the high performance LTSV format -You can use the high performance [LTSV format](http://ltsv.org){.external} with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger){.external} on Logs Data Platform to have even more security and performance. +You can use the high performance [LTSV format](http://ltsv.org) with HAProxy by using a custom format. This option is best suited for high traffic websites and is highly customisable. You can remove fields that you don't need in your logs or add some optional ones (like SSL ciphers and version used in the connection, client port, request counter...). To configure it you will need to specify your format in the HAProxy configuration file and then configure your rsyslog configuration to enclose the log line into a compatible LTSV log line. Moreover you can spawn your own high-performance collector with [Flowgger](https://github.com/jedisct1/flowgger) on Logs Data Platform to have even more security and performance. #### HAProxy log format configuration -The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt){.external} (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: +The flags used to define your log format are described in the [HAProxy documentation](http://www.haproxy.org/download/1.8/doc/configuration.txt) (section 8.2.4 in the version 1.8 of HAProxy). Here is an example of a log format that is fully compatible with our field naming convention. In place of your previous log option, use the following entry: ```text log-format client_ip:%ci\tclient_port_int:%cp\tdate_time:%t\tfrontend_name:%ft\tbackend_name:%b\tserver_name:%s\ttime_request_int:%Tq\ttime_queue_int:%Tw\ttime_backend_connect_int:%Tc\ttime_backend_response_int:%Tr\ttime_duration_int:%Tt\thttp_status_code_int:%ST\tbytes_read_int:%B\tcaptured_request_cookie:%CC\tcaptured_response_cookie:%CS\ttermination_state:%tsc\tactconn_int:%ac\tfeconn_int:%fc\tbeconn_int:%bc\tsrvconn_int:%sc\tretries_int:%rc\tsrv_queue_int:%sq\tbackend_queue_int:%bq\tcaptured_request_headers:%hr\tcaptured_response_headers:%hs\thttp_request:%r\tmessage:%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r @@ -292,7 +292,7 @@ In this configuration, we added some $Action directives to have a more robust co ### Filebeat -[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external} and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. 
On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. +[Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss) and its HAProxy module allow you to bypass the log formatting step entirely. You will still need RSyslog or any equivalent software to retrieve the logs from HAProxy. On Debian/Ubuntu, the HAProxy package will also setup the rsyslog configuration file at the following path **/etc/rsyslog.d/49-haproxy.conf**. You may have to restart Rsyslog to see logs appearing in the default path **/var/log/haproxy.log**. After you have downloaded filebeat, you need to enable the HAProxy module by running the following command: @@ -349,5 +349,5 @@ Here is an example of a dashboard that you can craft from the HAProxy logs. HAPr - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.de-de.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.de-de.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. 
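Before generating test traffic, it can help to double-check on the MySQL side that the slow query log is actually enabled and that Filebeat is watching the right file. This quick check is not part of the original guide; it only queries standard MySQL server variables:

```
-- Is the slow query log enabled?
SHOW VARIABLES LIKE 'slow_query_log';
-- Which file is it written to? This should match the path Filebeat reads.
SHOW VARIABLES LIKE 'slow_query_log_file';
-- Queries longer than this threshold (in seconds) are logged.
SHOW VARIABLES LIKE 'long_query_time';
```

If `slow_query_log_file` does not match the path configured in your **filebeat.yml**, adjust one or the other before testing.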
-Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-asia.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-asia.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. 
For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-au.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-au.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ca.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ca.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-gb.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-gb.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ie.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-ie.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-sg.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-sg.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-us.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.en-us.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-es.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-es.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-us.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.es-us.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-ca.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-ca.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-fr.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.fr-fr.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.it-it.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.it-it.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pl-pl.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pl-pl.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pt-pt.md index bc9464d5e74..089721933e3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_mysql_slow_queries/guide.pt-pt.md @@ -23,7 +23,7 @@ Before, you must read these three guides: ### Configure the MySQL slow query logs To send your logs to Logs Data Platform you first need to activate the slow query logs in your MySQL configuration. -We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/){.external} for your own version of MySQL. For example here is a working configuration on MySQL 5.6: +We recommend you refer to the official [MySQL documentation](http://dev.mysql.com/doc/) for your own version of MySQL. For example here is a working configuration on MySQL 5.6: ```ini # Here you can see queries with especially long duration @@ -69,8 +69,8 @@ Slow query logs are multi-line logs giving information: ### Configure Filebeat on your system -Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss){.external}. -We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat){.external}. Here is a minimal **filebeat.yml** configuration file. +Our favorite way to send MySQL slow query logs is to send logs directly to Logs Data Platform by using [Filebeat](https://www.elastic.co/fr/downloads/beats/filebeat-oss). +We cover Filebeat in depth in [another tutorial](/pages/manage_and_operate/observability/logs_data_platform/ingestion_filebeat). Here is a minimal **filebeat.yml** configuration file. ```yaml #=========================== Filebeat inputs ============================= @@ -166,7 +166,7 @@ $ ldp@ubuntu:~$ sudo /etc/init.d/filebeat restart depending on your distribution. -Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db){.external} and use join and like queries. Alternatively, you can use the MySQL Sleep query: +Try to run some slow queries in your database. For this you can use this [database sample](https://github.com/datacharmer/test_db) and use join and like queries. 
Alternatively, you can use the MySQL Sleep query: ``` SELECT SLEEP(2); @@ -190,5 +190,5 @@ All this information can help you to analyse the most difficult queries for your - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.de-de.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.de-de.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. 
If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
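To make the three-part filter described above more concrete, here is a deliberately trimmed-down sketch of the hashtag branch only. It is not the guide's full configuration: the field names (`hashtags`, `[hashtags][text]`) are assumptions about how the tweet entities were promoted in the earlier mutate step.

```ruby
filter {
  if [type] == "tweet" {
    # Clone the tweet into a new event whose type is set to "hashtag",
    # but only when the tweet actually carries hashtags.
    if [hashtags] {
      clone {
        clones => ["hashtag"]
      }
    }
  }
  if [type] == "hashtag" {
    # Emit one event per element of the hashtags array.
    split {
      field => "hashtags"
    }
    # Replace the message with the hashtag text, prefixed by '#'.
    mutate {
      replace => { "message" => "#%{[hashtags][text]}" }
    }
  }
}
```

The mention branch follows the same pattern: a second clone type, a split on the mentions array, and a mutate to reshape the resulting events.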
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-asia.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-asia.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
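As a companion to the three sections described above, here is a heavily simplified sketch of the clone/split/mutate pattern. Field paths such as `[entities][hashtags]`, the `screen_name` lookup and the `@` prefix for mentions are assumptions made for illustration; the guide's full configuration also promotes fields to the top level and removes extra ones, which is omitted here:

```
filter {
  if [type] == "tweet" {
    # Clone the event only when there is something to split;
    # each clone's type becomes "hashtag" or "mention".
    if [entities][hashtags] {
      clone { clones => ["hashtag"] }
    }
    if [entities][user_mentions] {
      clone { clones => ["mention"] }
    }
  }

  if [type] == "hashtag" {
    # One event per hashtag: a tweet with 4 hashtags yields 4 hashtag events.
    split { field => "[entities][hashtags]" }
    mutate {
      replace      => { "message" => "#%{[entities][hashtags][text]}" }
      remove_field => ["entities"]
    }
  }

  if [type] == "mention" {
    # Same idea for mentions (field names assumed).
    split { field => "[entities][user_mentions]" }
    mutate {
      replace      => { "message" => "@%{[entities][user_mentions][screen_name]}" }
      remove_field => ["entities"]
    }
  }
}
```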
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-au.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-au.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ca.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ca.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-gb.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-gb.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ie.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-ie.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-sg.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-sg.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-us.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.en-us.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-es.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-es.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-es.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-us.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.es-us.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-ca.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-ca.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
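As a reminder of the input side discussed above, a minimal `twitter` input could look like the sketch below. The credentials are placeholders, the keywords are examples rather than the ones used in this guide, and the **follows** option expects numeric Twitter user IDs, not screen names.

```
input {
  twitter {
    # placeholder credentials from the Twitter application page
    consumer_key       => "<consumer_key>"
    consumer_secret    => "<consumer_secret>"
    oauth_token        => "<access_token>"
    oauth_token_secret => "<access_token_secret>"
    # example keywords; tweets matching any of them are collected
    keywords           => ["playstation", "xbox", "nintendo"]
    # example numeric user ID, used to follow a specific account
    follows            => ["1234567890"]
    # keep the complete tweet object delivered by the Streaming API
    full_tweet         => true
  }
}
```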
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-fr.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-fr.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.fr-fr.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.it-it.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.it-it.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.it-it.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pl-pl.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pl-pl.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pl-pl.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pt-pt.md index 1dff2719aea..95d5613e3d3 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pt-pt.md +++ b/pages/manage_and_operate/observability/logs_data_platform/usecase_twitter/guide.pt-pt.md @@ -24,7 +24,7 @@ If you have completely understood these three guides, let's dive into this one. #### Twitter application creation -Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/){.external} and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: +Logstash has a powerful Twitter Input plugin. This plugin allows you to connect to the Stream API of Twitter and to listen for incoming tweets. In order to do this, it just needs your Twitter account API Keys. They are free and can be retrieved with a Twitter account. Log in your Twitter account in: [https://apps.twitter.com/](https://apps.twitter.com/) and click on create a new app. Fill a name, a description and a website for your project. Read and agree to the Twitter Developer Agreement to proceed. You will then arrive on the application webpage: ![twitter_app](images/twitter-app.png){.thumbnail} @@ -66,7 +66,7 @@ twitter { Fill the consumer Keys and Secret with the keys you obtained at the Twitter app configuration step. The oauth_token and the oauth_token_secret are the Access Token and Access Token Secret you created just before. -The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html){.external}. +The keywords array is the special array where you can specify which keywords you want to follow. Here I want to follow the three different competitors of the famous #ConsoleWars. If you want to follow tweets that contain multiple terms simultaneously you just separate them by a space in the same string. For example: "call of duty" will follow only tweets that contain 'call', 'of' and 'Duty'. 
You can also just follow a specific Twitter account by using the option **follows**. For more information about the Twitter input, go to the complete [Twitter input documentation](https://www.elastic.co/guide/en/logstash/6.7/plugins-inputs-twitter.html). You must use two additional parameters: @@ -176,7 +176,7 @@ if [type] == "mention" { The configuration looks quite long and complex, it is in fact split into three parts: the *tweet type section*, the *hashtag type section* and the *mention type section* -- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html){.external} to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. +- The tweet type section: In this section, we select all the objects that have the *tweet* type. We use the *mutate filter* to extract and move some information at the top level of the event. We also remove unneeded information as id_str or timestamp_ms. Then we use [conditional expressions](https://www.elastic.co/guide/en/logstash/6.7/event-dependent-configuration.html) to extract information and to create hashtags and mentions objects. The *clone filters* will create a new event that will contain a copy of the full tweet and will tag it as a hashtag or mention type. They will execute only if mentions or hashtags are present. - The hashtag type section: In this section, the hashtags of a tweet will be split into distinct events so a tweet that has 4 hashtags will generate 4 events of type hashtag. That's the purpose of the *split filter*. After the split filter, there is a mutate filter that will promote some information at the top level of the event and remove unnecessary information for this type of object. It will also change the message to the hashtag text itself with the preceding 'hash' character. - The mention type section: It is pretty much the same as the hashtag one. One *split filter* to create mentions events and one *mutate filter* to extract, delete and modify useful information. @@ -254,11 +254,11 @@ There are many more possibilities. Of course you can create beautiful dashboards ![Dashboards](images/dashboard.png){.thumbnail} -That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms){.external}. +That's all for now. If you have any proposition or trouble with this tutorial, don't hesitate to reach us on the [Community hub](https://community.ovh.com/en/c/Platform/data-platforms). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.de-de.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.de-de.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.de-de.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. -If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. +If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). 
## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-asia.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-asia.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-asia.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. -If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. 
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-au.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-au.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-au.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. -If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. 
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ca.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ca.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ca.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. -If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. 
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-gb.md index e0883481993..aed250f647b 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-gb.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-gb.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -64,11 +64,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ ![Data source 2](images/datasource_2.png){.thumbnail} To explore further, you can create a new dashboard and add different styles of visualizations. 
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. +If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ie.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ie.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-ie.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. 
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. +If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-sg.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-sg.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-sg.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. 
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}. +If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). ## Go further - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start) - Documentation: [Guides](/products/observability-logs-data-platform) -- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external} +- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms) - Create an account: [Try it!](/links/manage-operate/ldp) diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-us.md index 4e398e0004b..e36899ea883 100644 --- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-us.md +++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.en-us.md @@ -5,7 +5,7 @@ updated: 2024-11-28 ## Objective -[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. +[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this. ## Requirements @@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now ### Setup your own grafana -Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing). -Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external} +Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing). +Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/) ### Launch it! -If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: +If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource: ![Data source_1](images/datasource_1.png){.thumbnail} @@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._ To explore further, you can create a new dashboard and add different styles of visualizations. 
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-es.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-es.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-us.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.es-us.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-ca.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-ca.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-fr.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.fr-fr.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.it-it.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.it-it.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pl-pl.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pl-pl.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pt-pt.md
index 4e398e0004b..e36899ea883 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_grafana/guide.pt-pt.md
@@ -5,7 +5,7 @@ updated: 2024-11-28

 ## Objective

-[Grafana](http://grafana.org/){.external} provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.
+[Grafana](http://grafana.org/) provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world. Since release 7, Grafana is able to communicate with OpenSearch and so allow you to mix data from Logs Data Platform and other data sources like IoT in the same place. This guide will show you how to achieve this.

 ## Requirements

@@ -38,12 +38,12 @@ So here you go, now Logs Data Platform knows what stream you want to browse. Now

 ### Setup your own grafana

-Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/){.external} (v9.0.0 at the time of writing).
-Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/){.external}
+Get the latest Grafana release here: [http://grafana.org/download/](http://grafana.org/download/) (v9.0.0 at the time of writing).
+Then follow the Grafana installation guide according to your platform: [http://docs.grafana.org/installation/](http://docs.grafana.org/installation/)

 ### Launch it!

-If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000){.external} Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:
+If everything is setup properly, launch your favorite browser, and point it to [http://localhost:3000](http://localhost:3000) Once logged in with your grafana credentials, reach data sources panel to setup your Logs Data Platform datasource:

 ![Data source_1](images/datasource_1.png){.thumbnail}

@@ -65,11 +65,11 @@ If your configuration is correct, it should display: " _Index Ok. Timefield Ok._

 To explore further, you can create a new dashboard and add different styles of visualizations.
-If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/){.external}.
+If you want to know what you can do with Grafana and OpenSearch, read the [official documentation](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/).

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.de-de.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.de-de.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.de-de.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.de-de.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-asia.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-asia.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-asia.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-asia.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-au.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-au.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-au.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-au.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ca.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ca.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ca.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-gb.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-gb.md
index 138f50ba7f1..4bbc2ec3c7c 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-gb.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-gb.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,12 +54,12 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ie.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ie.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ie.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-ie.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-sg.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-sg.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-sg.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-sg.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-us.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-us.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.en-us.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-es.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-es.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-es.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-es.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-us.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-us.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-us.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.es-us.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-ca.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-ca.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-ca.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-ca.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-fr.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-fr.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-fr.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.fr-fr.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.it-it.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.it-it.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.it-it.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.it-it.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pl-pl.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pl-pl.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pl-pl.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pl-pl.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.
-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)
diff --git a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pt-pt.md b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pt-pt.md
index 7846da261ae..eb221ad283e 100644
--- a/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pt-pt.md
+++ b/pages/manage_and_operate/observability/logs_data_platform/visualization_opensearch_dashboards/guide.pt-pt.md
@@ -5,7 +5,7 @@ updated: 2022-06-13

 ## Objective

-This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/){.external} and craft some beautiful Dashboards from your logs.
+This guide will help you unleash the full power of [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) and craft some beautiful Dashboards from your logs.

 ## Requirements

@@ -54,11 +54,11 @@ In this configuration page, you can configure as the Index name, the full name o

 You can also explore any [OpenSearch index](/pages/manage_and_operate/observability/logs_data_platform/opensearch_index) you created on the platform. One OpenSearch Dashboards instance allows you to explore all the data you delivered on Logs Data Platform.

-To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/){.external}
+To know what you can do with OpenSearch Dashboards, read the [OpenSearch Dashboards documentation](https://opensearch.org/docs/latest/dashboards/index/)

 ## Go further

 - Getting Started: [Quick Start](/pages/manage_and_operate/observability/logs_data_platform/getting_started_quick_start)
 - Documentation: [Guides](/products/observability-logs-data-platform)
-- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms){.external}
+- Community hub: [https://community.ovh.com](https://community.ovh.com/en/c/Platform/data-platforms)
 - Create an account: [Try it!](/links/manage-operate/ldp)